From zaitcev at redhat.com Wed Jun 1 00:12:36 2016
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Tue, 31 May 2016 18:12:36 -0600
Subject: [rdo-list] Review login
In-Reply-To:
References: <20160529112741.7de18d51@lembas.zaitcev.lan>
	<20160530220650.00a306ed@lembas.zaitcev.lan>
Message-ID: <20160531181236.50cf0991@lembas.zaitcev.lan>

On Tue, 31 May 2016 18:30:21 +0200
Haïkel wrote:

> > If you used rdopkg clone you can now rdopkg review-spec but we also
> > need to fix defaultremote in .gitreview, I've raised that in
> > https://github.com/openstack-packages/rdopkg/issues/63
>
> As a workaround, I use that snippet to fix remotes.
>
> for i in origin patches; do URL=`git remote get-url review-$i` && git
> remote set-url $i $URL; done

There's no such thing as "git remote get-url foo" here, only set-url
(throws error: Unknown subcommand: get-url). Is it something clever in
your ~/.gitconfig?

In any case, I updated rdopkg from Jakub's copr and re-ran rdopkg clone.
Things progressed a bit:

[zaitcev at lembas openstack-swift.master]$ rdopkg review-spec
## get_package_env
## review_spec
git review -r review-origin rpm-master
Problem running 'git remote update review-origin'
Fetching review-origin
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
error: Could not fetch review-origin
Problems encountered installing commit-msg hook
The following command failed with exit code 1
"scp -P29418 zaitcev at review.rdoproject.org:hooks/commit-msg .git/hooks/commit-msg"
-----------------------
Permission denied (publickey).
-----------------------
command failed: git review -r review-origin rpm-master
[zaitcev at lembas openstack-swift.master]$

How do you deal with it?
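[Editorial aside: `git remote get-url` was only added in git 2.7, which is why it throws "Unknown subcommand" on older installs. A minimal sketch of the same remote-fixing loop that works on older gits too, reading the URL via `git config --get` instead; it assumes you are inside a checkout that already has `review-origin`/`review-patches` remotes, e.g. one made by `rdopkg clone`:]

```shell
# Variant of the remote-fixing snippet quoted above that avoids
# `git remote get-url` (git >= 2.7 only). `git config --get
# remote.<name>.url` reads the same value and works on older gits.
# Assumes the clone has review-origin and review-patches remotes.
for i in origin patches; do
    URL=$(git config --get "remote.review-$i.url") &&
        git remote set-url "$i" "$URL"
done
```

[After running it, `git remote -v` should show `origin` and `patches` pointing at the same URLs as their `review-*` counterparts.]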
Greetings,
-- Pete

From tdecacqu at redhat.com Wed Jun 1 00:36:45 2016
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Wed, 1 Jun 2016 00:36:45 +0000
Subject: [rdo-list] Review login
In-Reply-To: <20160531181236.50cf0991@lembas.zaitcev.lan>
References: <20160529112741.7de18d51@lembas.zaitcev.lan>
	<20160530220650.00a306ed@lembas.zaitcev.lan>
	<20160531181236.50cf0991@lembas.zaitcev.lan>
Message-ID: <574E2E1D.2040002@redhat.com>

On 06/01/2016 12:12 AM, Pete Zaitcev wrote:
> The following command failed with exit code 1
> "scp -P29418 zaitcev at review.rdoproject.org:hooks/commit-msg .git/hooks/commit-msg"
> -----------------------
> Permission denied (publickey).
> -----------------------
> command failed: git review -r review-origin rpm-master
> [zaitcev at lembas openstack-swift.master]$
>
> How do you deal with it?

That happens when gerrit doesn't know your user ssh public key. You can
upload your public key on this page:

  https://review.rdoproject.org/r/#/settings/ssh-keys

Once added, you can check manually that it connects, like so:

  ssh -p 29418 zaitcev at review.rdoproject.org

If you use a special key, it's best to use a .ssh/config like this:

  Host review.rdoproject.org
      Port 29418
      User github-username
      IdentityFile ~/.ssh/commit-key

Regards,
-Tristan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: OpenPGP digital signature
URL: 

From zaitcev at redhat.com Wed Jun 1 04:50:16 2016
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Tue, 31 May 2016 22:50:16 -0600
Subject: [rdo-list] Review login
In-Reply-To: <574E2E1D.2040002@redhat.com>
References: <20160529112741.7de18d51@lembas.zaitcev.lan>
	<20160530220650.00a306ed@lembas.zaitcev.lan>
	<20160531181236.50cf0991@lembas.zaitcev.lan>
	<574E2E1D.2040002@redhat.com>
Message-ID: <20160531225016.46c49039@lembas.zaitcev.lan>

On Wed, 1 Jun 2016 00:36:45 +0000
Tristan Cacqueray wrote:

> > "scp -P29418 zaitcev at review.rdoproject.org:hooks/commit-msg .git/hooks/commit-msg"
> > -----------------------
> > Permission denied (publickey).
>
> That happen when your gerrit doesn't know your user ssh public key.

Thanks, Tristan. The ssh key did the trick.

One thing that made it trickier was the web design of the dashboard,
where the user's identity was outside the right edge and neatly
truncated by the browser, so I never suspected that anything was there.
Once you mentioned the Gerrit settings, I went looking for them on
purpose and found them.

-- Pete

From chkumar246 at gmail.com Wed Jun 1 14:20:09 2016
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 1 Jun 2016 19:50:09 +0530
Subject: [rdo-list] RDO Bug Statistics [2016-06-01]
Message-ID: 

# RDO Bugs on 2016-06-01

This email summarizes the active RDO bugs listed in the Red Hat
Bugzilla database at . To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 127
- Fixed (MODIFIED, POST, ON_QA): 41

## Number of open bugs by component

 dib-utils                 [  1] +
 distribution              [  6] +++++++
 Documentation             [  1] +
 instack                   [  1] +
 instack-undercloud        [  9] ++++++++++
 openstack-ceilometer      [  2] ++
 openstack-cinder          [  2] ++
 openstack-glance          [  1] +
 openstack-horizon         [  1] +
 openstack-ironic-disco... [  1] +
 openstack-keystone        [  1] +
 openstack-neutron         [  6] +++++++
 openstack-nova            [  4] ++++
 openstack-packstack       [ 33] ++++++++++++++++++++++++++++++++++++++++
 openstack-puppet-modules  [  7] ++++++++
 openstack-sahara          [  2] ++
 openstack-selinux         [  2] ++
 openstack-tripleo         [ 13] +++++++++++++++
 openstack-tripleo-heat... [  1] +
 openstack-tripleo-imag... [  1] +
 openstack-trove           [  1] +
 Package Review            [ 11] +++++++++++++
 python-neutronclient      [  1] +
 python-novaclient         [  1] +
 rdo-manager               [ 14] ++++++++++++++++
 rdopkg                    [  1] +
 RFEs                      [  2] ++
 tempest                   [  1] +

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state
NEW, ASSIGNED, ON_DEV and has not yet been fixed. (127 bugs)

### dib-utils (1 bug)

[1263779 ] http://bugzilla.redhat.com/1263779 (NEW)
Component: dib-utils
Last change: 2016-04-18
Summary: Packstack Ironic admin_url misconfigured in nova.conf

### distribution (6 bugs)

[1243533 ] http://bugzilla.redhat.com/1243533 (NEW)
Component: distribution
Last change: 2016-06-01
Summary: (RDO) Tracker: Review requests for new RDO Liberty packages

[1316169 ] http://bugzilla.redhat.com/1316169 (ASSIGNED)
Component: distribution
Last change: 2016-05-18
Summary: openstack-barbican-api missing pid dir or wrong pid file specified

[1329341 ] http://bugzilla.redhat.com/1329341 (NEW)
Component: distribution
Last change: 2016-05-20
Summary: Tracker: Blockers and Review requests for new RDO Newton packages

[1301751 ] http://bugzilla.redhat.com/1301751 (NEW)
Component: distribution
Last change: 2016-04-18
Summary: Move all logging to stdout/err to allow systemd throttling logging of errors

[1290163 ] http://bugzilla.redhat.com/1290163 (NEW)
Component: distribution
Last change: 2016-05-17
Summary: Tracker: Blockers and Review requests for new RDO Mitaka packages

[1337335 ] http://bugzilla.redhat.com/1337335 (NEW)
Component: distribution
Last change: 2016-05-25
Summary: Hiera >= 2.x packaging

### Documentation (1 bug)

[1272108 ] http://bugzilla.redhat.com/1272108 (NEW)
Component:
Documentation Last change: 2016-04-18 Summary: [DOC] External network should be documents in RDO manager installation ### instack (1 bug) [1315827 ] http://bugzilla.redhat.com/1315827 (NEW) Component: instack Last change: 2016-05-09 Summary: openstack undercloud install fails with "Element pip- and-virtualenv already loaded." ### instack-undercloud (9 bugs) [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: wget is missing from qcow2 image fails instack-build- images script [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: Installing instack undercloud on Fedora20 VM fails [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . 
[1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: Overcloud images contain Kilo repos [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: instack-build-images does not stop on certain errors [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2016-04-22 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: Sphinx docs for instack-undercloud have an incorrect network topology ### openstack-ceilometer (2 bugs) [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-04-27 Summary: python-redis is not installed with packstack allinone [1331510 ] http://bugzilla.redhat.com/1331510 (ASSIGNED) Component: openstack-ceilometer Last change: 2016-05-31 Summary: Gnocchi 2.0.2-1 release does not have Mitaka default configuration file ### openstack-cinder (2 bugs) [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2016-04-19 Summary: Configuration file in share forces ignore of auth_uri [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2016-04-19 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-glance (1 bug) [1312466 ] http://bugzilla.redhat.com/1312466 (NEW) Component: openstack-glance Last change: 2016-04-19 Summary: Support for blueprint cinder-store-upload-download in glance_store ### openstack-horizon (1 bug) [1333508 ] http://bugzilla.redhat.com/1333508 (NEW) 
Component: openstack-horizon Last change: 2016-05-20 Summary: LBaaS v2 Dashboard UI ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2016-02-26 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (1 bug) [1337346 ] http://bugzilla.redhat.com/1337346 (NEW) Component: openstack-keystone Last change: 2016-06-01 Summary: CVE-2016-4911 openstack-keystone: Incorrect Audit IDs in Keystone Fernet Tokens can result in revocation bypass [openstack-rdo] ### openstack-neutron (6 bugs) [1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED) Component: openstack-neutron Last change: 2016-04-19 Summary: [RFE] [neutron] neutron services needs more RPM granularity [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2016-04-19 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1334797 ] http://bugzilla.redhat.com/1334797 (NEW) Component: openstack-neutron Last change: 2016-05-20 Summary: Ensure translations are installed correctly and picked up at runtime [1266381 ] http://bugzilla.redhat.com/1266381 (NEW) Component: openstack-neutron Last change: 2016-04-19 Summary: OpenStack Liberty QoS feature is not working on EL7 as is need MySQL-python-1.2.5 [1281308 ] http://bugzilla.redhat.com/1281308 (NEW) Component: openstack-neutron Last change: 2016-04-19 Summary: QoS policy is not enforced when using a previously used port [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2016-04-19 Summary: Use neutron-sanity-check in CI checks ### openstack-nova (4 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2016-04-22 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: 
openstack-nova Last change: 2016-04-22 Summary: logrotate should copytruncate to avoid openstack logging to deleted files [1294747 ] http://bugzilla.redhat.com/1294747 (NEW) Component: openstack-nova Last change: 2016-05-16 Summary: Migration fails when the SRIOV PF is not online [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2016-05-11 Summary: Ensure translations are installed correctly and picked up at runtime ### openstack-packstack (33 bugs) [1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis [1194678 ] http://bugzilla.redhat.com/1194678 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: On aarch64, nova.conf should default to vnc_enabled=False [1293693 ] http://bugzilla.redhat.com/1293693 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Keystone setup fails on missing required parameter [1286995 ] http://bugzilla.redhat.com/1286995 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: PackStack should configure LVM filtering with LVM/iSCSI [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: API services has all admin permission instead of service [1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-18 Summary: RFE: Provide option to set bind_host/bind_port for API services [1297692 ] http://bugzilla.redhat.com/1297692 (ON_DEV) Component: openstack-packstack Last change: 2016-05-19 Summary: Raise MariaDB max connections limit [1302766 ] http://bugzilla.redhat.com/1302766 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: Add Magnum support using puppet-magnum [1285494 ] http://bugzilla.redhat.com/1285494 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: 
openstack- packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf [1316222 ] http://bugzilla.redhat.com/1316222 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-18 Summary: Packstack installation failed due to wrong http config [1291492 ] http://bugzilla.redhat.com/1291492 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Unfriendly behavior of IP filtering for VXLAN with EXCLUDE_SERVERS [1227298 ] http://bugzilla.redhat.com/1227298 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack should support MTU settings [1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Packstack wording is unclear for demo and testing provisioning. [1208812 ] http://bugzilla.redhat.com/1208812 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: add DiskFilter to scheduler_default_filters [1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2016-05-16 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-19 Summary: [RFE] Include Fedora cloud images in some nice way [1296899 ] http://bugzilla.redhat.com/1296899 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-20 Summary: Swift's proxy-server is not configured to use ceilometer [1005073 ] http://bugzilla.redhat.com/1005073 (NEW) Component: openstack-packstack Last change: 2016-04-19 Summary: [RFE] Please add glance and nova lib folder config [903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: RFE: Include the 
ability in PackStack to support SSL for all REST services and message bus communication [1239027 ] http://bugzilla.redhat.com/1239027 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: please move httpd log files to corresponding dirs [1324070 ] http://bugzilla.redhat.com/1324070 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: RFE: PackStack Support for LBaaSv2 [1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: The warning message " NetworkManager is active " appears even when the NetworkManager is inactive [1292271 ] http://bugzilla.redhat.com/1292271 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Receive Msg 'Error: Could not find user glance' [1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: AMQP1.0 server configurations needed [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2016-05-18 Summary: [RFE] SPICE support in packstack [1338496 ] http://bugzilla.redhat.com/1338496 (NEW) Component: openstack-packstack Last change: 2016-05-31 Summary: Failed to install with packstack [1312487 ] http://bugzilla.redhat.com/1312487 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack with Swift Glance backend does not seem to work [1184806 ] http://bugzilla.redhat.com/1184806 (NEW) Component: openstack-packstack Last change: 2016-04-28 Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend [1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-19 Summary: support Keystone LDAP [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2016-04-19 Summary: swift: Admin user does not have permissions to see containers created by glance service [1286828 ] 
http://bugzilla.redhat.com/1286828 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: Packstack should have the option to install QoS (neutron) [1172467 ] http://bugzilla.redhat.com/1172467 (NEW) Component: openstack-packstack Last change: 2016-04-19 Summary: New user cannot retrieve container listing ### openstack-puppet-modules (7 bugs) [1318332 ] http://bugzilla.redhat.com/1318332 (NEW) Component: openstack-puppet-modules Last change: 2016-04-19 Summary: Cinder workaround should be removed [1297535 ] http://bugzilla.redhat.com/1297535 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: Undercloud installation fails ::aodh::keystone::auth not found for instack [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1316856 ] http://bugzilla.redhat.com/1316856 (NEW) Component: openstack-puppet-modules Last change: 2016-04-28 Summary: packstack fails to configure ovs bridge for CentOS [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: trove guestagent config mods for integration testing [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2016-05-18 Summary: Offset Swift ports to 6200 [1289761 ] http://bugzilla.redhat.com/1289761 (NEW) Component: openstack-puppet-modules Last change: 2016-05-25 Summary: PackStack installs Nova crontab that nova user can't run ### openstack-sahara (2 bugs) [1305790 ] http://bugzilla.redhat.com/1305790 (NEW) Component: openstack-sahara Last change: 2016-02-09 Summary: Failure to launch Caldera 5.0.4 Hadoop Cluster via Sahara Wizards on RDO Liberty [1305419 ] http://bugzilla.redhat.com/1305419 (NEW) Component: openstack-sahara Last change: 2016-02-10 Summary: Failure to launch Hadoop HDP 2.0.6 Cluster via Sahara Wizards on RDO Liberty ### 
openstack-selinux (2 bugs) [1320043 ] http://bugzilla.redhat.com/1320043 (NEW) Component: openstack-selinux Last change: 2016-04-19 Summary: rootwrap-daemon can't start after reboot due to AVC denial [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2016-04-18 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) ### openstack-tripleo (13 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1303614 ] http://bugzilla.redhat.com/1303614 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: overcloud deployment failed AttributeError: 'Proxy' object has no attribute 'api' [1341093 ] http://bugzilla.redhat.com/1341093 (NEW) Component: openstack-tripleo Last change: 2016-06-01 Summary: Tripleo QuickStart HA deployment attempts constantly crash [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI [1329095 ] http://bugzilla.redhat.com/1329095 (NEW) Component: openstack-tripleo Last change: 2016-04-22 Summary: mariadb and keystone down after an upgrade from liberty to mitaka [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation [1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: 
missing python-proliantutils [1334259 ] http://bugzilla.redhat.com/1334259 (NEW) Component: openstack-tripleo Last change: 2016-05-09 Summary: openstack overcloud image upload fails with "Required file "./ironic-python-agent.initramfs" does not exist." [1340865 ] http://bugzilla.redhat.com/1340865 (NEW) Component: openstack-tripleo Last change: 2016-05-31 Summary: Tripleo QuickStart HA deployment attempts constantly crash [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: User can not login into the overcloud horizon using the proper credentials ### openstack-tripleo-heat-templates (1 bug) [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2016-04-18 Summary: TripleO should use pymysql database driver since Liberty ### openstack-tripleo-image-elements (1 bug) [1303567 ] http://bugzilla.redhat.com/1303567 (NEW) Component: openstack-tripleo-image-elements Last change: 2016-04-18 Summary: Overcloud deployment fails using Ceph ### openstack-trove (1 bug) [1327068 ] http://bugzilla.redhat.com/1327068 (NEW) Component: openstack-trove Last change: 2016-05-24 Summary: trove guest agent should create a sudoers entry ### Package Review (11 bugs) [1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2016-04-18 Summary: New Package: python-dracclient [1326586 ] http://bugzilla.redhat.com/1326586 (NEW) Component: Package Review Last change: 2016-04-13 Summary: Review request: Sensu [1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2016-05-19 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud [1318310 ] http://bugzilla.redhat.com/1318310 (NEW) Component: 
Package Review Last change: 2016-05-17 Summary: Review Request: openstack-magnum-ui - OpenStack Magnum UI Horizon plugin [1331952 ] http://bugzilla.redhat.com/1331952 (ASSIGNED) Component: Package Review Last change: 2016-06-01 Summary: Review Request: openstack-mistral-ui - OpenStack Mistral Dashboard [1341687 ] http://bugzilla.redhat.com/1341687 (NEW) Component: Package Review Last change: 2016-06-01 Summary: Review request: openstack-neutron-lbaas-ui [1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2016-05-20 Summary: Review Request: Murano - is an application catalog for OpenStack [1329125 ] http://bugzilla.redhat.com/1329125 (ASSIGNED) Component: Package Review Last change: 2016-04-26 Summary: Review Request: python-oslo-privsep - OpenStack library for privilege separation [1331486 ] http://bugzilla.redhat.com/1331486 (NEW) Component: Package Review Last change: 2016-05-24 Summary: Tracker bugzilla for puppet packages in RDO Newton cycle [1312328 ] http://bugzilla.redhat.com/1312328 (NEW) Component: Package Review Last change: 2016-05-19 Summary: New Package: openstack-ironic-staging-drivers [1318765 ] http://bugzilla.redhat.com/1318765 (NEW) Component: Package Review Last change: 2016-05-25 Summary: Review Request: openstack-sahara-tests - Sahara Scenario Test Framework ### python-neutronclient (1 bug) [1132541 ] http://bugzilla.redhat.com/1132541 (ASSIGNED) Component: python-neutronclient Last change: 2016-04-18 Summary: neutron security-group-rule-list fails with URI too long ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2016-05-02 Summary: Missing versioned dependency on python-six ### rdo-manager (14 bugs) [1306350 ] http://bugzilla.redhat.com/1306350 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: With RDO-manager, if not configured, the first nic on compute nodes gets addresses from dhcp as a default [1272376 ] 
http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Duplicate nova hypervisors after rebooting compute nodes [1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: overcloud-novacompute stuck in spawning state [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: No way to increase yum timeouts when building images [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) [1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera [1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1306364 ] http://bugzilla.redhat.com/1306364 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: With RDO-manager, using bridge mappings, Neutron opensvswitch-agent plugin's config file don't gets populated correctly [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: HA overcloud with network isolation deployment fails [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Glance client returning 'Expected endpoint' [1230582 
] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: there is a newer image that can be used to deploy openstack [1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux. ### rdopkg (1 bug) [1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download ### RFEs (2 bugs) [1193886 ] http://bugzilla.redhat.com/1193886 (ASSIGNED) Component: RFEs Last change: 2016-04-18 Summary: RFE: wait for DB after boot [1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2016-05-20 Summary: [RFE] Provide easy to use upgrade tool ### tempest (1 bug) [1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager ## Fixed bugs This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, ON_QA and has been fixed. You can help out by testing the fix to make sure it works as intended. 
(41 bugs)

### distribution (4 bugs)
[1328980 ] http://bugzilla.redhat.com/1328980 (MODIFIED) Component: distribution Last change: 2016-04-21 Summary: Log handler repeatedly crashes
[1336566 ] http://bugzilla.redhat.com/1336566 (ON_QA) Component: distribution Last change: 2016-05-20 Summary: Paramiko needs to be updated to 2.0 to match upstream requirement
[1317971 ] http://bugzilla.redhat.com/1317971 (POST) Component: distribution Last change: 2016-05-23 Summary: openstack-cloudkitty-common should have a /etc/cloudkitty/api_paste.ini
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2016-04-18 Summary: Tuskar Fails After Remove/Reinstall Of RDO

### instack-undercloud (1 bug)
[1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: instack-undercloud Last change: 2016-05-05 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf.

### openstack-ceilometer (1 bug)
[1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-04-18 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing

### openstack-cinder (1 bug)
[1212899 ] http://bugzilla.redhat.com/1212899 (POST) Component: openstack-cinder Last change: 2016-05-20 Summary: [packaging] missing dependencies for openstack-cinder

### openstack-glance (1 bug)
[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2016-04-19 Summary: Glance api ssl issue

### openstack-ironic-discoverd (1 bug)
[1322892 ] http://bugzilla.redhat.com/1322892 (MODIFIED) Component: openstack-ironic-discoverd Last change: 2016-05-31 Summary: No valid interfaces found during introspection

### openstack-keystone (2 bugs)
[1341332 ] http://bugzilla.redhat.com/1341332 (POST) Component: openstack-keystone Last change: 2016-06-01 Summary: keystone logrotate configuration should use size configuration
[1280530 ] http://bugzilla.redhat.com/1280530 (MODIFIED) Component: openstack-keystone Last change: 2016-05-20 Summary: Fernet tokens cannot read key files with SELinux enabled

### openstack-neutron (2 bugs)
[1049807 ] http://bugzilla.redhat.com/1049807 (POST) Component: openstack-neutron Last change: 2016-04-19 Summary: neutron-dhcp-agent fails to start with plenty of SELinux AVC denials
[1281920 ] http://bugzilla.redhat.com/1281920 (POST) Component: openstack-neutron Last change: 2016-04-19 Summary: neutron-server will not start: fails with pbr version issue

### openstack-nova (1 bug)
[1301156 ] http://bugzilla.redhat.com/1301156 (POST) Component: openstack-nova Last change: 2016-04-22 Summary: openstack-nova missing specfile requires on castellan>=0.3.1

### openstack-packstack (19 bugs)
[1335612 ] http://bugzilla.redhat.com/1335612 (MODIFIED) Component: openstack-packstack Last change: 2016-05-31 Summary: CONFIG_USE_SUBNETS=y won't work correctly with VLAN
[1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: packstack requires 2 runs to install ceilometer
[1288179 ] http://bugzilla.redhat.com/1288179 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly"
[1018900 ] http://bugzilla.redhat.com/1018900 (MODIFIED) Component: openstack-packstack Last change: 2016-05-18 Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1285314 ] http://bugzilla.redhat.com/1285314 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack needs to support aodh services since Mitaka
[1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1302275 ] http://bugzilla.redhat.com/1302275 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: neutron-l3-agent does not start on Mitaka-2 when enabling FWaaS
[1302256 ] http://bugzilla.redhat.com/1302256 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: neutron-server does not start on Mitaka-2 when enabling LBaaS
[1266028 ] http://bugzilla.redhat.com/1266028 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack should use pymysql database driver since Liberty
[1282746 ] http://bugzilla.redhat.com/1282746 (POST) Component: openstack-packstack Last change: 2016-05-18 Summary: Swift's proxy-server is not configured to use ceilometer
[1150652 ] http://bugzilla.redhat.com/1150652 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6
[1297833 ] http://bugzilla.redhat.com/1297833 (POST) Component: openstack-packstack Last change: 2016-04-19 Summary: VPNaaS should use libreswan driver instead of openswan by default
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-packstack Last change: 2016-05-18 Summary: Horizon help url in RDO points to the RHOS documentation
[1187412 ] http://bugzilla.redhat.com/1187412 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Script wording for service installation should be consistent
[1255369 ] http://bugzilla.redhat.com/1255369 (POST) Component: openstack-packstack Last change: 2016-05-19 Summary: Improve session settings for horizon
[1298245 ] http://bugzilla.redhat.com/1298245 (MODIFIED) Component: openstack-packstack Last change: 2016-04-18 Summary: Add possibility to change DEFAULT/api_paste_config in trove.conf
[1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1124982 ] http://bugzilla.redhat.com/1124982 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Help text for SSL is incorrect regarding passphrase on the cert
[1330289 ] http://bugzilla.redhat.com/1330289 (POST) Component: openstack-packstack Last change: 2016-05-21 Summary: Failure to install Controller/Network&&Compute Cluster on RDO Mitaka with keystone API V3

### openstack-utils (1 bug)
[1211989 ] http://bugzilla.redhat.com/1211989 (POST) Component: openstack-utils Last change: 2016-04-18 Summary: openstack-status shows 'disabled on boot' for the mysqld service

### Package Review (2 bugs)
[1323219 ] http://bugzilla.redhat.com/1323219 (ON_QA) Component: Package Review Last change: 2016-05-12 Summary: Review Request: openstack-trove-ui - OpenStack Dashboard plugin for Trove project
[1323222 ] http://bugzilla.redhat.com/1323222 (ON_QA) Component: Package Review Last change: 2016-05-12 Summary: Review request for openstack-sahara-ui

### python-keystoneclient (1 bug)
[973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2016-04-19 Summary: user-get fails when using IDs which are not UUIDs

### rdo-manager (2 bugs)
[1271335 ] http://bugzilla.redhat.com/1271335 (POST) Component: rdo-manager Last change: 2016-04-18 Summary: [RFE] Support explicit configuration of L2 population
[1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2016-04-18 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"

### rdo-manager-cli (2 bugs)
[1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2016-04-18 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils"
[1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2016-04-18 Summary: VXLAN should be default neutron network type

Thanks,

Chandan Kumar

From amoralej at redhat.com Wed Jun 1 16:08:38 2016
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Wed, 1 Jun 2016 18:08:38 +0200
Subject: [rdo-list] [Meeting] RDO meeting (2016-06-01) Minutes
Message-ID:

==============================
#rdo: RDO meeting (2016-06-01)
==============================

Meeting started by amoralej at 15:01:00 UTC. The full logs are available at
https://meetbot.fedoraproject.org/rdo/2016-06-01/rdo_meeting_(2016-06-01).2016-06-01-15.01.log.html .

Meeting summary
---------------
* roll call (amoralej, 15:01:12)
* DLRN instance migration to ci.centos infra (amoralej, 15:03:02)
* ACTION: trown babysit tripleo promote and make sure repo gets promoted correctly on internal dlrn (trown, 15:21:20)
* ACTION: dmsimard will confirm with kb status of buildlogs repos (amoralej, 15:26:16)
* Increase timeout for Packstack CI jobs (amoralej, 15:26:35)
* ACTION: amoralej investigate about sending info to rdo alerts about slow gate jobs (amoralej, 15:31:28)
* sync maintainers from rdoinfo to review.rdoproject.org (add people listed as maintainers in core) (amoralej, 15:31:38)
* LINK: https://gist.github.com/hguemar/4550930637f9163b2748e650b47e48c9 (number80, 15:32:56)
* ACTION: number80 to submit sync rdoinfo maintainer script in rdo_gating_scripts.git (number80, 15:34:57)
* LINK: https://gist.github.com/hguemar/4550930637f9163b2748e650b47e48c9#file-sync_rdoinfo_maintainers-sh-L20 (number80, 15:35:29)
* RHOSP/third-party repositories statuses in RDO documentation (Cf.
https://github.com/redhat-openstack/website/pull/589) (amoralej, 15:38:05)
* ACTION: number80 prototype warning about third-party repo (number80, 15:43:04)
* chair for next meeting (amoralej, 15:43:44)
* ACTION: chandankumar to chair next meeting (amoralej, 15:45:44)
* open floor (apevec, 15:47:17)
* ACTION: trown to put something up in tripleo-quickstart/tree/master/ci-scripts for local validation of common-pending updates in the future (apevec, 16:01:03)

Meeting ended at 16:01:21 UTC.

Action Items
------------
* trown babysit tripleo promote and make sure repo gets promoted correctly on internal dlrn
* dmsimard will confirm with kb status of buildlogs repos
* amoralej investigate about sending info to rdo alerts about slow gate jobs
* number80 to submit sync rdoinfo maintainer script in rdo_gating_scripts.git
* number80 prototype warning about third-party repo
* chandankumar to chair next meeting
* trown to put something up in tripleo-quickstart/tree/master/ci-scripts for local validation of common-pending updates in the future

Action Items, by person
-----------------------
* amoralej
  * amoralej investigate about sending info to rdo alerts about slow gate jobs
* chandankumar
  * chandankumar to chair next meeting
* dmsimard
  * dmsimard will confirm with kb status of buildlogs repos
* number80
  * number80 to submit sync rdoinfo maintainer script in rdo_gating_scripts.git
  * number80 prototype warning about third-party repo
* trown
  * trown babysit tripleo promote and make sure repo gets promoted correctly on internal dlrn
  * trown to put something up in tripleo-quickstart/tree/master/ci-scripts for local validation of common-pending updates in the future
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* apevec (63)
* dmsimard (59)
* amoralej (59)
* trown (33)
* number80 (21)
* EmilienM (20)
* zodbot (10)
* jpena (10)
* slagle (7)
* imcsk8 (6)
* chandankumar (3)
* openstack (3)
* rhallisey (2)
* fbo (2)
* jruzicka (1)
* rdogerrit (1)
* cmsimard (0)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

Thanks,
Alfredo Moralejo

From ayoung at redhat.com Wed Jun 1 17:38:40 2016
From: ayoung at redhat.com (Adam Young)
Date: Wed, 1 Jun 2016 13:38:40 -0400
Subject: [rdo-list] Access undercloud / overcloud vms via public ip addresses
In-Reply-To:
References: <8869505b489c43f491c55e3014ec9b87@tecnotree.com> <573E53C1.6000501@redhat.com>
Message-ID: <574F1DA0.8080708@redhat.com>

On 05/31/2016 02:52 AM, Gerard Braad wrote:
> Hi,
>
> For this I also have the following question:
>
> On Fri, May 20, 2016 at 8:01 AM, Adam Young wrote:
>> We need a tutorial walking through the basic use cases.
>
> I was wondering about the scope of the Quickstart documentation and the
> general TripleO documentation. What I think is the following:
>
> * Quickstart documentation should be simple and refer to the
>   implementation and configuration of quickstart options
> * General TripleO would describe the planning, deployment, and
>   troubleshooting of TripleO. It therefore might refer to the quickstart
>   tool to perform a deployment...
>
> But as you can imagine, this will lead to having 'two truths' related to
> TripleO. Is the quickstart doing something so drastically different that
> it needs its own documentation?
>
> The quickstart tool itself describes how to access the console of the
> nodes, and also how to get to the dashboard. But this is because it now
> deals with a virtualized environment.
>
> More advanced how-tos now end up being captured in blog posts in several
> different places.
>
> How could this be improved?

I think the quickstart deploy docs need to be augmented with "here is how you modify your existing networks to connect them to the outside world."
> regards,
>
> Gerard

From vanditboy at gmail.com Thu Jun 2 13:04:42 2016
From: vanditboy at gmail.com (Michal Adamczyk)
Date: Thu, 2 Jun 2016 14:04:42 +0100
Subject: [rdo-list] RDO for XenServer 7
Message-ID:

Hi,

I would like to ask if there is any chance to build/adjust RDO for XenServer 7, as it is based on CentOS 7.2?

I tried to install Mitaka a few days ago and came across the following error with nova-compute:

https://bugs.launchpad.net/nova/+bug/1587537

It would be nice to be able to install it without hacks or tricks with an extra compute VM...

--
Kind regards,
Michal Adamczyk
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From apevec at redhat.com Thu Jun 2 16:23:34 2016
From: apevec at redhat.com (Alan Pevec)
Date: Thu, 2 Jun 2016 18:23:34 +0200
Subject: [rdo-list] RDO for XenServer 7
In-Reply-To:
References:
Message-ID:

On Thu, Jun 2, 2016 at 3:04 PM, Michal Adamczyk wrote:
> I would like to ask if there is any chance to build/adjust RDO for
> XenServer 7 as it's based on CentOS 7.2?
>
> I tried to install Mitaka few days ago and came across following error
> with nova-compute:
>
> https://bugs.launchpad.net/nova/+bug/1587537

Transaction check error:
  file /usr/libexec/qemu-bridge-helper from install of qemu-kvm-common-10:1.5.3-105.el7_2.4.x86_64 conflicts with file from package qemu-xen-2.2.1-4.36786.x86_64

That looks like a conflict between the qemu provided by the EL7 base and the qemu from Xen, so it's not directly related to RDO packaging. Where is qemu-xen-2.2.1-4.36786.x86_64 coming from: is it from Xen or the CentOS VirtSIG repo?

You could try excluding qemu in the base EL7 repo to avoid this conflict, but since this isn't a tested combination, there might be other issues down the road. We would welcome community-contributed CI results against RDO packages using Xen as the hypervisor!
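[The exclusion Alan suggests can be sketched as a repo-file fragment. This is an untested sketch: the package globs and the repo file layout are assumptions to adapt from what `rpm -q` reports as conflicting on the actual XenServer 7 host.]

```ini
; /etc/yum.repos.d/CentOS-Base.repo (fragment, sketch only)
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
; keep the Xen-supplied qemu: never pull qemu from the base repo
exclude=qemu-kvm* qemu-img*
```

[The same `exclude=` line would need to be repeated under the `[updates]` section, since both repos ship the conflicting qemu packages.]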
Cheers,
Alan

From vanditboy at gmail.com Thu Jun 2 20:00:17 2016
From: vanditboy at gmail.com (Michal Adamczyk)
Date: Thu, 2 Jun 2016 21:00:17 +0100
Subject: [rdo-list] RDO for XenServer 7
In-Reply-To:
References:
Message-ID:

Hi Alan,

qemu-xen-2.2.1-4.36786.x86_64 is coming from Citrix XenServer 7 (a full hypervisor solution/platform), so it cannot be changed - people can still get support from Citrix if needed. We are talking here about having Xen with XAPI, so the XenAPI driver can be used instead of libvirt.

On Thursday, 2 June 2016, Alan Pevec wrote:
> [...]

--
Kind regards,
Michal Adamczyk
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pgsousa at gmail.com Fri Jun 3 03:05:02 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 04:05:02 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
Message-ID:

Hi all,

I have been doing some tests on baremetal hosts, but I'm kind of stuck here, and it's starting to get frustrating.

I've followed the documentation from http://docs.openstack.org/developer/tripleo-docs/

First I tried the stable version from Liberty and got stuck on a python-config-oslo outdated-package bug.

Then I tried Mitaka and got stuck on this error:

"Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass
Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass"

My question is whether there is a stable version that can be installed on overcloud baremetal hosts that we can rely on.

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cbrown2 at ocf.co.uk Fri Jun 3 05:34:25 2016
From: cbrown2 at ocf.co.uk (Christopher Brown)
Date: Fri, 3 Jun 2016 06:34:25 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
Message-ID:

I'm glad you said it.

I'm having exactly the same problem. I've had to customize an image due to missing python-hardware-detect, and introspection is very hit and miss.

Currently testing delorean packages, but may have to consider reverting to Liberty.

Documentation is not clear and still references Liberty. I think some Mitaka stabilisation work would be gratefully received.

Regards,

Christopher Brown

-------- Original message --------
From: Pedro Sousa
Date: 03/06/2016 04:11 (GMT+00:00)
To: rdo-list
Subject: [rdo-list] Baremetal Tripleo stable version?

[...]

From pgsousa at gmail.com Fri Jun 3 07:17:38 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 08:17:38 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References:
Message-ID:

The problem is that I can't install either; both fail :-(. I've managed to install Mitaka at some point, but then something was updated that broke the installation and I get that rabbitmq problem. It's frustrating.

Guess I'll have to get back to Packstack; it's the only method that actually works fine so far.

Christopher, did you manage to install Liberty stable? Did you have to update anything on the overcloud image?

Thanks

Em 03/06/2016 06:34, "Christopher Brown" escreveu:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From duck at redhat.com Fri Jun 3 07:53:07 2016
From: duck at redhat.com (Marc Dequènes (Duck))
Date: Fri, 3 Jun 2016 16:53:07 +0900
Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting
In-Reply-To: <20160530150003.0125060A4009@fedocal02.phx2.fedoraproject.org>
References: <20160530150003.0125060A4009@fedocal02.phx2.fedoraproject.org>
Message-ID: <57513763.1070500@redhat.com>

Quack,

On 05/31/2016 12:00 AM, hguemar at fedoraproject.org wrote:
> Dear all,
>
> You are kindly invited to the meeting:
> RDO meeting on 2016-06-01 from 15:00:00 to 16:00:00 UTC
> At rdo at irc.freenode.net

I'll try to come from time to time, but maybe not every time, as I may have conflicting meetings/work. There is a community meeting already, and I try to notify Rich of important things, so it should be OK.

As for next week, there is an important meeting early in the morning for LinuxCon Japan, so I will most probably not be there for this (late) RDO meeting, sorry.

\_o<
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL:

From cbrown2 at ocf.co.uk Fri Jun 3 08:36:42 2016
From: cbrown2 at ocf.co.uk (Christopher Brown)
Date: Fri, 3 Jun 2016 09:36:42 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References:
Message-ID: <1464943002.9673.14.camel@ocf.co.uk>

Hi Pedro,

TL;DR: try the links below, and if that fails, try Liberty.

We've got Mitaka working after some of the frustrations you are experiencing. The image that gets built doesn't work. There was some discussion as to whether a hiera update broke this - it was reverted, but the image still seems broken.

Try using these:

http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/

We used packages from:

https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/

but the introspection is broken due to a race condition that hasn't been patched in this stable repo yet (I believe).

Not given up yet, but yes, Liberty TripleO worked fine for us.

There seems to be a large amount of development work going into tripleo-quickstart, which doesn't really work for production use, and I'm not sure the developers have access to lots of baremetal for testing. As a result, I think actual production RDO doesn't get huge amounts of testing - I'm talking baremetal HA controllers with network isolation over bonded connections, etc.

We have abandoned the tripleo docs and use the Red Hat docs as a reference.

I have no idea how to actually run Mitaka stable. When I follow the tripleo docs and use the "stable" delorean repo (still not sure what that actually is), I end up with packages like:

python-neutron-8.1.2-0.20160602095303.449bcc6.el7.centos.noarch

which seems to indicate it is a nightly snapshot from git. Obviously I'd rather not run that in production.

It's all a bit of a mess.
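[For what it's worth, the snapshot provenance can be read straight out of such a package name. A small sketch; the naming scheme (version, 14-digit build timestamp, short git hash) is inferred from the example above, not taken from DLRN documentation:]

```python
# Decode a DLRN-style NVR into upstream version, snapshot date and git hash.
# The field layout is an assumption inferred from the example package name.
import re

nvr = "python-neutron-8.1.2-0.20160602095303.449bcc6.el7.centos.noarch"
m = re.search(r"-(\d+\.\d+\.\d+)-0\.(\d{14})\.([0-9a-f]+)\.el7", nvr)
version, stamp, git_hash = m.groups()

print(version)    # 8.1.2
print(stamp[:8])  # 20160602 -- the snapshot date
print(git_hash)   # 449bcc6  -- short git hash of the snapshotted commit
```

[So this particular build is a snapshot of neutron git as of 2016-06-02, at commit 449bcc6 - which is why it is not something one would want on a production controller.]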
I did try to make a start on cleaning up the RDO docs, but to be honest it meant having to learn yet another type of documentation syntax, so I have reverted to internal documentation.

Cheers

On Fri, 2016-06-03 at 08:17 +0100, Pedro Sousa wrote:
> [...]

--
Regards,

Christopher Brown
OpenStack Engineer
OCF plc

Tel: +44 (0)114 257 2200
Web: www.ocf.co.uk
Blog: blog.ocf.co.uk
Twitter: @ocfplc

Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or an existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner.

OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG.

If you have received this message in error, please notify us immediately and remove it from your system.

From pgsousa at gmail.com Fri Jun 3 08:51:28 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 09:51:28 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <1464943002.9673.14.camel@ocf.co.uk>
References: <1464943002.9673.14.camel@ocf.co.uk>
Message-ID:

Hi Christopher,

Thank you, I'll test it out. According to my reading of the documentation, the Mitaka version is made from nightly builds, while the "stable" version is from Liberty. Maybe someone from RDO can clarify this.

Anyway, both seem broken right now; like you, I test this on baremetal for deploying in production, not in virtual environments.

Regards

On Fri, Jun 3, 2016 at 9:36 AM, Christopher Brown wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pgsousa at gmail.com Fri Jun 3 10:04:56 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 11:04:56 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <1464943002.9673.14.camel@ocf.co.uk>
References: <1464943002.9673.14.camel@ocf.co.uk>
Message-ID:

Christopher, did you have this error deploying Liberty?
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]: No handlers could be found for logger "oslo_config.cfg"
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]: Traceback (most recent call last):
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/bin/nova-compute", line 10, in <module>
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     sys.exit(main())
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/nova/cmd/compute.py", line 58, in main
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     config.parse_args(sys.argv)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/nova/config.py", line 60, in parse_args
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     rpc.init(CONF)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/nova/rpc.py", line 63, in init
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     aliases=TRANSPORT_ALIASES)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 186, in get_transport
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     invoke_kwds=kwargs)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 45, in __init__
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     verify_requirements=verify_requirements,
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     verify_requirements)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 170, in _load_plugins
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     self._on_load_failure_callback(self, ep, err)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 162, in _load_plugins
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     verify_requirements,
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     verify_requirements,
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 185, in _load_one_plugin
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     plugin = ep.load(require=verify_requirements)
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 101, in <module>
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]:     cfg.PortOpt('rabbit_port',
Jun 03 10:03:49 overcloud-novacompute-0 nova-compute[15490]: AttributeError: 'module' object has no attribute 'PortOpt'

Thanks

On Fri, Jun 3, 2016 at 9:36 AM, Christopher Brown wrote:
> [...]
>
> Try using these:
>
> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/
>
> We used packages from:
>
> https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/
>
> but the introspection is broken due to a race condition that hasn't
> been patched in this stable repo yet (I believe).
>
> Not given up yet, but yes, Liberty tripleo worked fine for us.
>
> There seems to be a large amount of development work going into
> tripleo-quickstart, which doesn't really work for production use, and I'm
> not sure if the developers have access to lots of baremetal for
> testing. As a result I think actual production RDO doesn't get huge
> amounts of testing - I'm talking baremetal HA controllers with network
> isolation over bonded connections etc.
>
> We have abandoned the tripleo docs and use the Red Hat docs as a
> reference.
>
> I have no idea how to actually run stable Mitaka. When I follow the
> tripleo docs and use the "stable" delorean repo (still not sure what
> that actually is) I end up with packages like:
>
> python-neutron-8.1.2-0.20160602095303.449bcc6.el7.centos.noarch
>
> which seems to indicate it is a nightly snapshot from git. Obviously
> I'd rather not run that in production.
>
> It's all a bit of a mess. I did try to make a start on cleaning up the
> RDO docs, but to be honest it meant having to learn yet another type of
> documentation syntax, so I have reverted to internal documentation.
>
> Cheers
>
> On Fri, 2016-06-03 at 08:17 +0100, Pedro Sousa wrote:
> > The problem is that I can't install either; both fail :-(. I've managed
> > to install Mitaka at some point, but then something was updated that
> > broke the installation and I get that rabbitmq problem. It's
> > frustrating.
> > I guess I'll have to go back to packstack; it's the only method that
> > actually works fine so far.
> > Christopher, did you manage to install stable Liberty? Did you have to
> > update anything on the overcloud image?
> > Thanks
> > On 03/06/2016 06:34, "Christopher Brown" wrote:
> > > I'm glad you said it.
> > >
> > > I'm having exactly the same problem. I've had to customize an image
> > > due to missing python-hardware-detect, and introspection is very hit
> > > and miss.
> > >
> > > Currently testing delorean packages but may have to consider
> > > reverting to Liberty.
> > >
> > > Documentation is not clear and still references Liberty. I think
> > > some Mitaka stabilisation work would be gratefully received.
> > >
> > > Regards,
> > >
> > > Christopher Brown
> > >
> > > -------- Original message --------
> > > From: Pedro Sousa
> > > Date: 03/06/2016 04:11 (GMT+00:00)
> > > To: rdo-list
> > > Subject: [rdo-list] Baremetal Tripleo stable version?
> > >
> > > Hi all,
> > >
> > > I've been doing some tests on baremetal hosts, but I'm kind of stuck
> > > here, and it's starting to get frustrating.
> > >
> > > I've followed the documentation from
> > > http://docs.openstack.org/developer/tripleo-docs/
> > >
> > > First I tried the stable version of Liberty and got stuck on an
> > > outdated python-oslo-config package bug.
> > >
> > > Then I tried Mitaka and got stuck on this error:
> > >
> > > "Could not retrieve fact='rabbitmq_nodename',
> > > resolution='': undefined method `[]' for nil:NilClass
> > > Could not retrieve fact='rabbitmq_nodename',
> > > resolution='': undefined method `[]' for nil:NilClass"
> > >
> > > My question is whether there's a stable version that can be installed
> > > on overcloud baremetal hosts that we can rely on?
> > >
> > > Thanks
>
> --
> Regards,
>
> Christopher Brown
> OpenStack Engineer
> OCF plc
>
> Tel: +44 (0)114 257 2200
> Web: www.ocf.co.uk
> Blog: blog.ocf.co.uk
> Twitter: @ocfplc
>
> Please note, any emails relating to an OCF Support request must always
> be sent to support at ocf.co.uk for a ticket number to be generated or
> existing support ticket to be updated.
Should this not be done then OCF > > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. Registered number > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > If you have received this message in error, please notify us > immediately and remove it from your system. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbrown2 at ocf.co.uk Fri Jun 3 10:10:47 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Fri, 3 Jun 2016 11:10:47 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464943002.9673.14.camel@ocf.co.uk> Message-ID: <1464948647.9673.19.camel@ocf.co.uk> No, Liberty deployed ok for us. It suggests to me a package mismatch. Have you completely rebuilt the undercloud and then the images using Liberty? On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > AttributeError: 'module' object has no attribute 'PortOpt' -- Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. If you have received this message in error, please notify us immediately and remove it from your system. 
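The "package mismatch" theory above can be checked directly: the `AttributeError: 'module' object has no attribute 'PortOpt'` in the traceback is the signature of an installed oslo.config that predates the `PortOpt` class, so probing whether the module exposes that attribute tells you if the compute node's package is too old. A minimal sketch (the `oslo_config.cfg`/`PortOpt` pair comes from the traceback; the `json` call is only a self-contained stdlib demo):

```python
import importlib

def module_has(name, attr):
    """Return True if the importable module `name` exposes `attr`."""
    try:
        return hasattr(importlib.import_module(name), attr)
    except ImportError:
        return False

# On the failing compute node the relevant probe would be (assumption):
#   module_has("oslo_config.cfg", "PortOpt")   # False with an outdated package
# Self-contained demo against the stdlib:
print(module_has("json", "dumps"))  # True
```

Run on the compute node itself, a False result for the oslo_config probe would confirm the version mismatch without reading any tracebacks.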
From pgsousa at gmail.com Fri Jun 3 10:15:58 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 11:15:58 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <1464948647.9673.19.camel@ocf.co.uk>
References: <1464943002.9673.14.camel@ocf.co.uk> <1464948647.9673.19.camel@ocf.co.uk>
Message-ID:

Yes, I've used this, but I'll try again as there seem to be new updates.

Stable Branch

*Skip all repos mentioned above, other than epel-release which is still required.*

Enable the latest RDO Stable Delorean repository for all packages:

sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo

Enable the Delorean Deps repository:

sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo

On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown wrote:
> No, Liberty deployed ok for us.
>
> It suggests to me a package mismatch. Have you completely rebuilt the
> undercloud and then the images using Liberty?
>
> On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote:
> > AttributeError: 'module' object has no attribute 'PortOpt'
> --
> Regards,
>
> Christopher Brown
> OpenStack Engineer
> OCF plc
>
> Tel: +44 (0)114 257 2200
> Web: www.ocf.co.uk
> Blog: blog.ocf.co.uk
> Twitter: @ocfplc
>
> Please note, any emails relating to an OCF Support request must always
> be sent to support at ocf.co.uk for a ticket number to be generated or
> existing support ticket to be updated. Should this not be done then OCF
> cannot be held responsible for requests not dealt with in a timely
> manner.
>
> OCF plc is a company registered in England and Wales. Registered number
> 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc,
> 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35
> 2PG.
>
> If you have received this message in error, please notify us
> immediately and remove it from your system.
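A recurring confusion in this thread is which branch a delorean repo actually tracks. Before running yum against a downloaded .repo file, it is worth confirming its baseurl points at the release you intended (liberty vs mitaka vs master). A sketch with stand-in file contents, since the real file comes from the curl commands above:

```python
import configparser

# Stand-in for a downloaded /etc/yum.repos.d/delorean-liberty.repo
# (hypothetical contents; the real file comes from trunk.rdoproject.org).
repo_text = """\
[delorean]
name=delorean-liberty
baseurl=https://trunk.rdoproject.org/centos7-liberty/current/
enabled=1
gpgcheck=0
"""

cfg = configparser.ConfigParser()
cfg.read_string(repo_text)
for section in cfg.sections():
    # Print each repo section and where it will pull packages from.
    print(section, "->", cfg[section]["baseurl"])
```

If the baseurl says `centos7-master` rather than a release branch, the "nightly snapshot from git" package versions seen earlier in the thread are exactly what you should expect.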
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Fri Jun 3 10:20:44 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 3 Jun 2016 10:20:44 +0000 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: References: , , , , Message-ID: ===================================== Fresh HA deployment attempt ===================================== [stack at undercloud ~]$ date Fri Jun 3 10:05:35 UTC 2016 [stack at undercloud ~]$ heat stack-list +--------------------------------------+------------+-----------------+---------------------+--------------+ | id | stack_name | stack_status | creation_time | updated_time | +--------------------------------------+------------+-----------------+---------------------+--------------+ | 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud | CREATE_COMPLETE | 2016-06-03T08:14:19 | None | +--------------------------------------+------------+-----------------+---------------------+--------------+ [stack at undercloud ~]$ nova list +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ | 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 | | 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 | | 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.11 | | 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 | +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ [stack at undercloud ~]$ ssh 
heat-admin at 192.0.2.10 Last login: Fri Jun 3 10:01:44 2016 from gateway [heat-admin at overcloud-controller-0 ~]$ sudo su - Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0 [root at overcloud-controller-0 ~]# . keystonerc_admin [root at overcloud-controller-0 ~]# pcs status Cluster name: tripleo_cluster Last updated: Fri Jun 3 10:07:22 2016 Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0 Stack: corosync Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum 3 nodes and 123 resources configured Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Full list of resources: ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 Clone Set: haproxy-clone [haproxy] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 Master/Slave Set: galera-master [galera] Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: memcached-clone [memcached] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: rabbitmq-clone [rabbitmq] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-core-clone [openstack-core] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Master/Slave Set: redis-master [redis] Masters: [ overcloud-controller-1 ] Slaves: [ overcloud-controller-0 overcloud-controller-2 ] Clone Set: mongod-clone [mongod] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: 
neutron-l3-agent-clone [neutron-l3-agent] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2 Clone Set: openstack-heat-engine-clone [openstack-heat-engine] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-heat-api-clone [openstack-heat-api] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-glance-api-clone [openstack-glance-api] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone 
Set: openstack-nova-api-clone [openstack-nova-api] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-sahara-api-clone [openstack-sahara-api] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-glance-registry-clone [openstack-glance-registry] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-cinder-api-clone [openstack-cinder-api] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: delay-clone [delay] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: neutron-server-clone [neutron-server] Started: [ overcloud-controller-0 
overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central] Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: httpd-clone [httpd] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor] Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] Failed Actions: * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none', last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms * openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none', last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none', last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms * openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none', last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=77, status=complete, exitreason='none', last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms * openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none', last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms PCSD Status: overcloud-controller-0: Online overcloud-controller-1: Online overcloud-controller-2: Online Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled 
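The quickest read of a long `pcs status` like the one above is the list of clone sets that are Stopped on every controller; here that is precisely the aodh/ceilometer telemetry set. A sketch of pulling those out programmatically, using a trimmed excerpt of the output above (the line wrapping is an assumption about the real `pcs status` layout):

```python
import re

# Trimmed excerpt of the `pcs status` output pasted above.
pcs_output = """\
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
    Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
    Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
    Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
"""

# A clone set name immediately followed by a "Stopped:" line is down everywhere.
stopped = re.findall(r"Clone Set: (\S+) \[[^\]]+\]\n\s*Stopped:", pcs_output)
print(stopped)
```

Against the full output above this would flag the aodh evaluator/listener/notifier, the ceilometer api/collector/central, and the delay clone, which matches the "telemetry services went down" diagnosis later in the thread.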
________________________________
From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
Sent: Monday, May 30, 2016 4:56 AM
To: John Trowbridge; Lars Kellogg-Stedman
Cc: rdo-list
Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash

Done one more time :-

[stack at undercloud ~]$ heat deployment-show 9cc8087a-6d82-4261-8a13-ee8c46e3a02d

Uploaded here :-

http://textuploader.com/5bm5v
________________________________
From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
Sent: Sunday, May 29, 2016 3:39 AM
To: John Trowbridge; Lars Kellogg-Stedman
Cc: rdo-list
Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash

The error is the same every time :-

2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:18 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt-ControllerServicesBaseDeployment_Step2-ufz2ccs5egd7]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:20 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:20 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-05-29 07:20:21 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:22 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
2016-05-29 07:24:22 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
Stack overcloud CREATE_FAILED
Deployment failed: Heat Stack create failed.
+ heat stack-list
+ grep -q CREATE_FAILED
+ deploy_status=1
++ heat resource-list --nested-depth 5 overcloud
++ grep FAILED
++ grep 'StructuredDeployment '
++ cut -d '|' -f3
+ for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
+ heat deployment-show 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3
+ exit 1

Minimal configuration deployments run with no errors and build a completely functional environment. However, this template:

#################################
# Test Controller + 2*Compute nodes
#################################
control_memory: 6144
compute_memory: 6144
undercloud_memory: 8192

# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4

# We set introspection to true and use only the minimal amount of nodes
# for this job, but test all defaults otherwise.
step_introspect: true

# Define a single controller node and a single compute node.
overcloud_nodes:
  - name: control_0
    flavor: control

  - name: compute_0
    flavor: compute

  - name: compute_1
    flavor: compute

# Tell tripleo how we want things done.
extra_args: >-
  --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  --ntp-server pool.ntp.org

network_isolation: true

picks up the new memory settings but doesn't create the second Compute node. Every time it is just a Controller and one Compute. HW: i7-4790, 32 GB RAM.

Thanks.
Boris
________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From trown at redhat.com Fri Jun 3 12:43:05 2016
From: trown at redhat.com (John Trowbridge)
Date: Fri, 3 Jun 2016 08:43:05 -0400
Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
In-Reply-To:
References:
Message-ID: <57517B59.7040103@redhat.com>

So this last one looks like telemetry services went down. You could
check the logs on the controllers to see if it was OOM killed. My bet
would be this is what is happening.

The reason that HA is not the default for tripleo-quickstart is exactly
this type of issue. It is pretty difficult to fit a full HA deployment
of TripleO on a 32G virthost. I think there is a near 100% chance that
the default HA config will crash when trying to do anything on the
deployed overcloud, due to running out of memory.

I have had some success in my local test setup using KSM [1] on the
virthost, and then changing the HA config to give the controllers more
memory. This results in overcommitting, but KSM can handle
overcommitting without going into swap. It might even be possible to
set up KSM in the environment setup part of quickstart. I would
certainly accept an RFE/patch for this [2,3].

If you have a virthost larger than 32G, you could similarly bump the
memory for the controllers, which should lead to a much higher success
rate.
There is also a feature coming in TripleO [4] that will allow choosing what services get deployed in each role, which will allow us to tweak the tripleo-quickstart HA config to deploy a minimal service layout in order to reduce memory requirements. Thanks a ton for giving tripleo-quickstart a go! [1] https://en.wikipedia.org/wiki/Kernel_same-page_merging [2] https://bugs.launchpad.net/tripleo-quickstart [3] https://review.openstack.org/#/q/project:openstack/tripleo-quickstart [4] https://blueprints.launchpad.net/tripleo/+spec/composable-services-within-roles On 06/03/2016 06:20 AM, Boris Derzhavets wrote: > ===================================== > > Fresh HA deployment attempt > > ===================================== > > [stack at undercloud ~]$ date > Fri Jun 3 10:05:35 UTC 2016 > [stack at undercloud ~]$ heat stack-list > +--------------------------------------+------------+-----------------+---------------------+--------------+ > | id | stack_name | stack_status | creation_time | updated_time | > +--------------------------------------+------------+-----------------+---------------------+--------------+ > | 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud | CREATE_COMPLETE | 2016-06-03T08:14:19 | None | > +--------------------------------------+------------+-----------------+---------------------+--------------+ > [stack at undercloud ~]$ nova list > +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ > | ID | Name | Status | Task State | Power State | Networks | > +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ > | 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 | > | 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 | > | 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2 | ACTIVE | - | 
Running | ctlplane=192.0.2.11 | > | 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 | > +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ > [stack at undercloud ~]$ ssh heat-admin at 192.0.2.10 > Last login: Fri Jun 3 10:01:44 2016 from gateway > [heat-admin at overcloud-controller-0 ~]$ sudo su - > Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0 > [root at overcloud-controller-0 ~]# . keystonerc_admin > > [root at overcloud-controller-0 ~]# pcs status > Cluster name: tripleo_cluster > Last updated: Fri Jun 3 10:07:22 2016 Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0 > Stack: corosync > Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum > 3 nodes and 123 resources configured > > Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > > Full list of resources: > > ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 > Clone Set: haproxy-clone [haproxy] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 > Master/Slave Set: galera-master [galera] > Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: memcached-clone [memcached] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: rabbitmq-clone [rabbitmq] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-core-clone [openstack-core] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Master/Slave Set: redis-master [redis] > Masters: [ overcloud-controller-1 ] > Slaves: [ overcloud-controller-0 overcloud-controller-2 ] > Clone Set: mongod-clone [mongod] > Started: [ 
overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator] > Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: neutron-l3-agent-clone [neutron-l3-agent] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2 > Clone Set: openstack-heat-engine-clone [openstack-heat-engine] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api] > Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener] > Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier] > Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: openstack-heat-api-clone [openstack-heat-api] > Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] > Clone Set: 
openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-glance-api-clone [openstack-glance-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-api-clone [openstack-nova-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: delay-clone [delay]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-server-clone [neutron-server]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: httpd-clone [httpd]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>
> Failed Actions:
> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms
> * openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms
> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms
> * openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms
> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=77, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms
> * openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms
>
> PCSD Status:
>   overcloud-controller-0: Online
>   overcloud-controller-1: Online
>   overcloud-controller-2: Online
>
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled
>
> ________________________________
> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
> Sent: Monday, May 30, 2016 4:56 AM
> To: John Trowbridge; Lars Kellogg-Stedman
> Cc: rdo-list
> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
>
> Done one more time :-
>
> [stack at undercloud ~]$ heat deployment-show 9cc8087a-6d82-4261-8a13-ee8c46e3a02d
>
> Uploaded here :-
>
> http://textuploader.com/5bm5v
> ________________________________
> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
> Sent: Sunday, May 29, 2016 3:39 AM
> To: John Trowbridge; Lars Kellogg-Stedman
> Cc: rdo-list
> Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
>
> Error every time is the same :-
>
> 2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:18 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt-ControllerServicesBaseDeployment_Step2-ufz2ccs5egd7]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:20 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:20 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:21 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:22 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
> 2016-05-29 07:24:22 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> Stack overcloud CREATE_FAILED
> Deployment failed: Heat Stack create failed.
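[Editor's note: the event stream above repeats a single root cause many times over. A short filter (an illustrative sketch, not part of TripleO's tooling) pulls the distinct failing resources out of a log like this:]

```python
import re
from collections import OrderedDict

# A few representative lines from the heat event stream quoted above.
events = """\
2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
"""

def failed_resources(text):
    """Return the distinct failing resource names, oldest first."""
    seen = OrderedDict()
    for line in text.splitlines():
        # Event lines look like: "<date> <time> [<resource>]: CREATE_FAILED ..."
        m = re.match(r'^\S+ \S+ \[([^\]]+)\]: CREATE_FAILED', line)
        if m:
            seen.setdefault(m.group(1), None)
    return list(seen)

print(failed_resources(events))
# -> ['0', 'ControllerServicesBaseDeployment_Step2', 'ControllerNodesPostDeployment', 'ComputeNodesPostDeployment']
```

[Run against the full log, this narrows hundreds of lines down to the handful of resources worth feeding into `heat deployment-show`.]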
> + heat stack-list
> + grep -q CREATE_FAILED
> + deploy_status=1
> ++ heat resource-list --nested-depth 5 overcloud
> ++ grep FAILED
> ++ grep 'StructuredDeployment '
> ++ cut -d '|' -f3
> + for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
> + heat deployment-show 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3
> + exit 1
>
>
> Minimal configuration deployments run with no errors and build a completely functional environment.
>
>
> However, this template :-
>
>
> #################################
> # Test Controller + 2*Compute nodes
> #################################
> control_memory: 6144
> compute_memory: 6144
>
> undercloud_memory: 8192
>
> # Giving the undercloud additional CPUs can greatly improve heat's
> # performance (and result in a shorter deploy time).
> undercloud_vcpu: 4
>
> # We set introspection to true and use only the minimal amount of nodes
> # for this job, but test all defaults otherwise.
> step_introspect: true
>
> # Define a single controller node and a single compute node.
> overcloud_nodes:
>   - name: control_0
>     flavor: control
>
>   - name: compute_0
>     flavor: compute
>
>   - name: compute_1
>     flavor: compute
>
> # Tell tripleo how we want things done.
> extra_args: >-
>   --neutron-network-type vxlan
>   --neutron-tunnel-types vxlan
>   --ntp-server pool.ntp.org
>
> network_isolation: true
>
>
> picks up the new memory settings but doesn't create the second Compute Node.
>
> Every time just Controller && (1)* Compute.
>
>
> HW - i7 4790, 32 GB RAM
>
>
> Thanks.
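[Editor's note: the template above does define two compute flavors, so the config itself asks for a compute scale of 2. Counting flavors in the overcloud_nodes list makes that explicit (a standalone sketch; quickstart derives its own counts from overcloud_nodes):]

```python
from collections import Counter

# The overcloud_nodes list from the template above, as plain data.
overcloud_nodes = [
    {"name": "control_0", "flavor": "control"},
    {"name": "compute_0", "flavor": "compute"},
    {"name": "compute_1", "flavor": "compute"},
]

# How many nodes of each flavor the config defines.
flavor_counts = Counter(node["flavor"] for node in overcloud_nodes)
print(dict(flavor_counts))  # -> {'control': 1, 'compute': 2}

# This is what --compute-scale should end up as for this template.
expected_compute_scale = flavor_counts["compute"]
print(expected_compute_scale)  # -> 2
```

[If only one compute VM appears after the deploy, comparing this count against the --compute-scale actually passed to the deploy command is a reasonable first check.]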
> > Boris > > ________________________________ > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From ibravo at ltgfederal.com Fri Jun 3 13:30:05 2016 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Fri, 3 Jun 2016 09:30:05 -0400 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464943002.9673.14.camel@ocf.co.uk> <1464948647.9673.19.camel@ocf.co.uk> Message-ID: Pedro / Christopher, Just wanted to share with you that I also had plenty of issues deploying on bare metal HA servers, and have paused the deployment using TripleO until better winds start to flow here. I was able to deploy the QuickStart, but on bare metal the history was different. Couldn't even deploy a two server configuration. I was thinking that it would be good to have the developers have access to one of our environments and go through a full install with us to better see where things fail. We can do this handholding deployment once every week/month based on developers time availability. That way we can get a working install, and we can troubleshoot real life environment problems. IB > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > Yes. I've used this, but I'll try again as there's seems to be new updates. > > > > Stable Branch Skip all repos mentioned above, other than epel-release which is still required. > Enable latest RDO Stable Delorean repository for all packages > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > Enable the Delorean Deps repository > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > >> On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown wrote: >> No, Liberty deployed ok for us. >> >> It suggests to me a package mismatch. 
Have you completely rebuilt the
>> undercloud and then the images using Liberty?
>>
>> On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote:
>> > AttributeError: 'module' object has no attribute 'PortOpt'
>> --
>> Regards,
>>
>> Christopher Brown
>> OpenStack Engineer
>> OCF plc
>>
>> Tel: +44 (0)114 257 2200
>> Web: www.ocf.co.uk
>> Blog: blog.ocf.co.uk
>> Twitter: @ocfplc
>>
>> Please note, any emails relating to an OCF Support request must always
>> be sent to support at ocf.co.uk for a ticket number to be generated or
>> existing support ticket to be updated. Should this not be done then OCF
>> cannot be held responsible for requests not dealt with in a timely
>> manner.
>>
>> OCF plc is a company registered in England and Wales. Registered number
>> 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc,
>> 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35
>> 2PG.
>>
>> If you have received this message in error, please notify us
>> immediately and remove it from your system.
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgsousa at gmail.com Fri Jun 3 13:49:31 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 14:49:31 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: 
References: <1464943002.9673.14.camel@ocf.co.uk>
	<1464948647.9673.19.camel@ocf.co.uk>
Message-ID: 

Hi Ignacio,

what versions have you tried to install and what problems have you found?

Until now I've only managed to install 1 controller + 1 compute once, using a mitaka nightly build. Everything else has failed. Testing now without delorean repos on liberty.
Regards On Fri, Jun 3, 2016 at 2:30 PM, Ignacio Bravo wrote: > Pedro / Christopher, > > Just wanted to share with you that I also had plenty of issues deploying > on bare metal HA servers, and have paused the deployment using TripleO > until better winds start to flow here. I was able to deploy the QuickStart, > but on bare metal the history was different. Couldn't even deploy a two > server configuration. > > I was thinking that it would be good to have the developers have access to > one of our environments and go through a full install with us to better see > where things fail. We can do this handholding deployment once every > week/month based on developers time availability. That way we can get a > working install, and we can troubleshoot real life environment problems. > > > IB > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > Yes. I've used this, but I'll try again as there's seems to be new updates. > > > > Stable Branch > > > *Skip all repos mentioned above, other than epel-release which is still > required.* > > Enable latest RDO Stable Delorean repository for all packages > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > > Enable the Delorean Deps repository > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown > wrote: > >> No, Liberty deployed ok for us. >> >> It suggests to me a package mismatch. Have you completely rebuilt the >> undercloud and then the images using Liberty? 
>> >> On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: >> > AttributeError: 'module' object has no attribute 'PortOpt' >> -- >> Regards, >> >> Christopher Brown >> OpenStack Engineer >> OCF plc >> >> Tel: +44 (0)114 257 2200 >> Web: www.ocf.co.uk >> Blog: blog.ocf.co.uk >> Twitter: @ocfplc >> >> Please note, any emails relating to an OCF Support request must always >> be sent to support at ocf.co.uk for a ticket number to be generated or >> existing support ticket to be updated. Should this not be done then OCF >> >> cannot be held responsible for requests not dealt with in a timely >> manner. >> >> OCF plc is a company registered in England and Wales. Registered number >> >> 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, >> >> 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 >> 2PG. >> >> If you have received this message in error, please notify us >> immediately and remove it from your system. >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ibravo at ltgfederal.com Fri Jun 3 14:09:36 2016 From: ibravo at ltgfederal.com (Ignacio Bravo) Date: Fri, 3 Jun 2016 10:09:36 -0400 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464943002.9673.14.camel@ocf.co.uk> <1464948647.9673.19.camel@ocf.co.uk> Message-ID: <948419BD-9C06-4178-9B8D-486D5FC00421@ltgfederal.com> I had issues with the introspection of nodes, due to a race condition that was fixed by dtantsur. The issue was that the official images did not include the patch, and thus were not usable by me. I tried creating the images myself and those too were failing. 
I got out of trying the nightly builds, as I was looking for a more long-term solution, so I was using the official production repos that were on centos.org (I believe they were http://mirror.centos.org/centos/7/cloud/x86_64/openstack-mitaka/ but can't recall right now).
The patch was not included on those packages when I was trying this out, that was just about the Austin summit.

IB

__
Ignacio Bravo
LTG Federal, Inc
www.ltgfederal.com

> On Jun 3, 2016, at 9:49 AM, Pedro Sousa wrote:
>
> Hi Ignacio,
>
> what versions have you tried to install and what problems have you found?
>
> Until now I've only managed to install once 1 controller + 1 compute using mitaka nightly build. Everything else has failed. Testing now without delorean repos on liberty.
>
> Regards
>
>
> On Fri, Jun 3, 2016 at 2:30 PM, Ignacio Bravo wrote:
> Pedro / Christopher,
>
> Just wanted to share with you that I also had plenty of issues deploying on bare metal HA servers, and have paused the deployment using TripleO until better winds start to flow here. I was able to deploy the QuickStart, but on bare metal the history was different. Couldn't even deploy a two server configuration.
>
> I was thinking that it would be good to have the developers have access to one of our environments and go through a full install with us to better see where things fail. We can do this handholding deployment once every week/month based on developers time availability. That way we can get a working install, and we can troubleshoot real life environment problems.
>
>
> IB
>
> On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote:
>
>> Yes. I've used this, but I'll try again as there's seems to be new updates.
>>
>>
>>
>> Stable Branch
>>
>> Skip all repos mentioned above, other than epel-release which is still required.
>> Enable latest RDO Stable Delorean repository for all packages >> >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo >> Enable the Delorean Deps repository >> >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo >> >> On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown > wrote: >> No, Liberty deployed ok for us. >> >> It suggests to me a package mismatch. Have you completely rebuilt the >> undercloud and then the images using Liberty? >> >> On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: >> > AttributeError: 'module' object has no attribute 'PortOpt' >> -- >> Regards, >> >> Christopher Brown >> OpenStack Engineer >> OCF plc >> >> Tel: +44 (0)114 257 2200 >> Web: www.ocf.co.uk >> Blog: blog.ocf.co.uk >> Twitter: @ocfplc >> >> Please note, any emails relating to an OCF Support request must always >> be sent to support at ocf.co.uk for a ticket number to be generated or >> existing support ticket to be updated. Should this not be done then OCF >> >> cannot be held responsible for requests not dealt with in a timely >> manner. >> >> OCF plc is a company registered in England and Wales. Registered number >> >> 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, >> >> 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 >> 2PG. >> >> If you have received this message in error, please notify us >> immediately and remove it from your system. >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From pgsousa at gmail.com Fri Jun 3 14:14:16 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Fri, 3 Jun 2016 15:14:16 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <948419BD-9C06-4178-9B8D-486D5FC00421@ltgfederal.com>
References: <1464943002.9673.14.camel@ocf.co.uk>
	<1464948647.9673.19.camel@ocf.co.uk>
	<948419BD-9C06-4178-9B8D-486D5FC00421@ltgfederal.com>
Message-ID: 

I did not have issues with introspection; I built images for both liberty and mitaka from the delorean repos. The problem is applying the heat templates when the overcloud is up, puppet issues, etc.

On Fri, Jun 3, 2016 at 3:09 PM, Ignacio Bravo wrote:

> I had issues with the introspection of nodes, due to a race condition that
> was fixed by dtantsur. The issue was that the official images did not
> include the patch, and thus were not usable by me. I tried creating the
> images myself and those too were failing.
>
> I got out of trying the nightly builds, as I was looking for a more long
> term solution, so I was using the official production repos that were on
> centos.org (I believe they were
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-mitaka/ but
> can't recall right now)
> The patch was not included on those packages when I was trying this out,
> that was just about the Austin summit.
>
>
> IB
>
>
> __
> Ignacio Bravo
> LTG Federal, Inc
> www.ltgfederal.com
>
>
> On Jun 3, 2016, at 9:49 AM, Pedro Sousa wrote:
>
> Hi Ignacio,
>
> what versions have you tried to install and what problems have you found?
>
> Until now I've only managed to install once 1 controller + 1 compute using
> mitaka nightly build. Everything else has failed. Testing now without
> delorean repos on liberty.
> > Regards > > > On Fri, Jun 3, 2016 at 2:30 PM, Ignacio Bravo > wrote: > >> Pedro / Christopher, >> >> Just wanted to share with you that I also had plenty of issues deploying >> on bare metal HA servers, and have paused the deployment using TripleO >> until better winds start to flow here. I was able to deploy the QuickStart, >> but on bare metal the history was different. Couldn't even deploy a two >> server configuration. >> >> I was thinking that it would be good to have the developers have access >> to one of our environments and go through a full install with us to better >> see where things fail. We can do this handholding deployment once every >> week/month based on developers time availability. That way we can get a >> working install, and we can troubleshoot real life environment problems. >> >> >> IB >> >> On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: >> >> Yes. I've used this, but I'll try again as there's seems to be new >> updates. >> >> >> >> Stable Branch >> >> >> *Skip all repos mentioned above, other than epel-release which is still >> required.* >> >> Enable latest RDO Stable Delorean repository for all packages >> >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo >> >> Enable the Delorean Deps repository >> >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo >> >> >> On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown >> wrote: >> >>> No, Liberty deployed ok for us. >>> >>> It suggests to me a package mismatch. Have you completely rebuilt the >>> undercloud and then the images using Liberty? 
>>> >>> On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: >>> > AttributeError: 'module' object has no attribute 'PortOpt' >>> -- >>> Regards, >>> >>> Christopher Brown >>> OpenStack Engineer >>> OCF plc >>> >>> Tel: +44 (0)114 257 2200 >>> Web: www.ocf.co.uk >>> Blog: blog.ocf.co.uk >>> Twitter: @ocfplc >>> >>> Please note, any emails relating to an OCF Support request must always >>> be sent to support at ocf.co.uk for a ticket number to be generated or >>> existing support ticket to be updated. Should this not be done then OCF >>> >>> cannot be held responsible for requests not dealt with in a timely >>> manner. >>> >>> OCF plc is a company registered in England and Wales. Registered number >>> >>> 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, >>> >>> 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 >>> 2PG. >>> >>> If you have received this message in error, please notify us >>> immediately and remove it from your system. >>> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbrown2 at ocf.co.uk Fri Jun 3 14:29:39 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Fri, 3 Jun 2016 15:29:39 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464943002.9673.14.camel@ocf.co.uk> <1464948647.9673.19.camel@ocf.co.uk> Message-ID: <1464964179.9673.30.camel@ocf.co.uk> Hello Ignacio, Thanks for your response and good to know it isn't just me! I would be more than happy to provide developers with access to our bare metal environments. I'll also file some bugzilla reports to see if this generates any interest. 
Please do let me know if you make any progress - I am trying to deploy HA with network isolation, multiple nics and vlans. The RDO web page states: "If you want to create a production-ready cloud, you'll want to use the TripleO quickstart guide." which is a contradiction in terms really. Cheers On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > Pedro / Christopher, > > Just wanted to share with you that I also had plenty of issues > deploying on bare metal HA servers, and have paused the deployment > using TripleO until better winds start to flow here. I was able to > deploy the QuickStart, but on bare metal the history was different. > Couldn't even deploy a two server configuration. > > I was thinking that it would be good to have the developers have > access to one of our environments and go through a full install with > us to better see where things fail. We can do this handholding > deployment once every week/month based on developers time > availability. That way we can get a working install, and we can > troubleshoot real life environment problems. > > > IB > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > > Yes. I've used this, but I'll try again as there's seems to be new > > updates. > > > > > > > > Stable Branch Skip all repos mentioned above, other than epel- > > release which is still required. > > Enable latest RDO Stable Delorean repository for all packages > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.r > > doproject.org/centos7-liberty/current/delorean.repo > > Enable the Delorean Deps repository > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://tru > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown > uk> wrote: > > > No, Liberty deployed ok for us. > > > > > > It suggests to me a package mismatch. Have you completely rebuilt > > > the > > > undercloud and then the images using Liberty? 
> > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > AttributeError: 'module' object has no attribute 'PortOpt' > > > -- > > > Regards, > > > > > > Christopher Brown > > > OpenStack Engineer > > > OCF plc > > > > > > Tel: +44 (0)114 257 2200 > > > Web: www.ocf.co.uk > > > Blog: blog.ocf.co.uk > > > Twitter: @ocfplc > > > > > > Please note, any emails relating to an OCF Support request must > > > always > > > be sent to support at ocf.co.uk for a ticket number to be generated > > > or > > > existing support ticket to be updated. Should this not be done > > > then OCF > > > > > > cannot be held responsible for requests not dealt with in a > > > timely > > > manner. > > > > > > OCF plc is a company registered in England and Wales. Registered > > > number > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > > > OCF plc, > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > Sheffield S35 > > > 2PG. > > > > > > If you have received this message in error, please notify us > > > immediately and remove it from your system. > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. 
If you have received this message in error, please notify us immediately and remove it from your system. From bderzhavets at hotmail.com Fri Jun 3 15:30:10 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 3 Jun 2016 15:30:10 +0000 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: <57517B59.7040103@redhat.com> References: , <57517B59.7040103@redhat.com> Message-ID: 1. Attempting to address your concern ( if I understood you correct ) First log :- [root at overcloud-controller-0 ceilometer]# cat central.log | grep ERROR 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service [req-4db5f172-0bf0-4200-9cf4-174859cdc00b admin - - - -] Error starting thread. 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service Traceback (most recent call last): 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service service.start() 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/agent/manager.py", line 384, in start 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self.partition_coordinator.start() 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 84, in start 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service backend_url, self._my_id) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements=verify_requirements, 
2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service import redis 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service ImportError: No module named redis 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service [root at overcloud-controller-0 ceilometer]# clear  [root at overcloud-controller-0 ceilometer]# cat central.log | grep ERROR 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service 
[req-4db5f172-0bf0-4200-9cf4-174859cdc00b admin - - - -] Error starting thread. 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service Traceback (most recent call last): 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service service.start() 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/agent/manager.py", line 384, in start 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self.partition_coordinator.start() 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 84, in start 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service backend_url, self._my_id) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements=verify_requirements, 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins 2016-06-03 08:50:04.405 17503 
ERROR oslo_service.service verify_requirements, 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service import redis 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service ImportError: No module named redis 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service Second log :- [root at overcloud-controller-0 ceilometer]# cd - /var/log/aodh [root at overcloud-controller-0 aodh]# cat evaluator.log | grep ERROR 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service [-] Error starting thread. 
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service Traceback (most recent call last):
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     service.start()
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/aodh/evaluator/__init__.py", line 229, in start
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     self.partition_coordinator.start()
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/aodh/coordination.py", line 133, in start
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     self.backend_url, self._my_id)
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     invoke_args=(member_id, parsed_url, options)).driver
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     verify_requirements=verify_requirements,
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     verify_requirements)
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     self._on_load_failure_callback(self, ep, err)
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     verify_requirements,
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     verify_requirements,
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     plugin = ep.load(require=verify_requirements)
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in <module>
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service     import redis
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service ImportError: No module named redis
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service

2. Memory DIMMs DDR3 (Kingston HyperX 1600 MHz) are not a problem. My board, an ASUS Z97-P, cannot support more than 32 GB. So....

3. The i7 4790 surprised me doing deployments on TripleO Quickstart, in particular Controller + 2x Computes (--compute-scale 2).

Thank you
Boris.

________________________________________
From: John Trowbridge
Sent: Friday, June 3, 2016 8:43 AM
To: Boris Derzhavets; John Trowbridge; Lars Kellogg-Stedman
Cc: rdo-list
Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash

So this last one looks like the telemetry services went down. You could check the logs on the controllers to see if they were OOM-killed. My bet would be that this is what is happening.

The reason that HA is not the default for tripleo-quickstart is exactly this type of issue. It is pretty difficult to fit a full HA deployment of TripleO on a 32G virthost.
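[Editor's note] Two separate failure modes are in play here: the telemetry clones dying with `ImportError: No module named redis`, and the suspicion that services are being OOM-killed on a memory-starved controller. A minimal triage sketch, run on one controller, assuming an EL7 system Python 2 and `python-redis` as the client package name (the package name is an assumption — verify with `yum search redis`):

```shell
# 1) Was anything OOM-killed? The kernel logs every kill to /var/log/messages.
sudo grep -i 'oom-killer\|out of memory' /var/log/messages | tail

# 2) Can the system Python actually import the redis client that
#    tooz's redis driver needs?
python -c 'import redis' 2>/dev/null && echo "redis client present" \
                                     || echo "redis client missing"

# 3) If it is missing, install it and let pacemaker retry the stopped
#    aodh/ceilometer clones (package name is an assumption, see above).
sudo yum install -y python-redis
sudo pcs resource cleanup
```

If step 2 fails on all three controllers, the overcloud image is simply missing the package, and pacemaker will keep flapping the telemetry clones until it is installed.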
I think there is a near-100% chance that the default HA config will crash when trying to do anything on the deployed overcloud, due to running out of memory.

I have had some success in my local test setup using KSM [1] on the virthost, and then changing the HA config to give the controllers more memory. This results in overcommitting, but KSM can handle overcommitting without going into swap. It might even be possible to set up KSM in the environment setup part of quickstart. I would certainly accept an RFE/patch for this [2,3].

If you have a virthost larger than 32G, you could similarly bump the memory for the controllers, which should lead to a much higher success rate.

There is also a feature coming in TripleO [4] that will allow choosing which services get deployed in each role, which will let us tweak the tripleo-quickstart HA config to deploy a minimal service layout in order to reduce memory requirements.

Thanks a ton for giving tripleo-quickstart a go!

[1] https://en.wikipedia.org/wiki/Kernel_same-page_merging
[2] https://bugs.launchpad.net/tripleo-quickstart
[3] https://review.openstack.org/#/q/project:openstack/tripleo-quickstart
[4] https://blueprints.launchpad.net/tripleo/+spec/composable-services-within-roles

On 06/03/2016 06:20 AM, Boris Derzhavets wrote:
> =====================================
>
> Fresh HA deployment attempt
>
> =====================================
>
> [stack at undercloud ~]$ date
> Fri Jun 3 10:05:35 UTC 2016
> [stack at undercloud ~]$ heat stack-list
> +--------------------------------------+------------+-----------------+---------------------+--------------+
> | id                                   | stack_name | stack_status    | creation_time       | updated_time |
> +--------------------------------------+------------+-----------------+---------------------+--------------+
> | 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud  | CREATE_COMPLETE | 2016-06-03T08:14:19 | None         |
> +--------------------------------------+------------+-----------------+---------------------+--------------+
> [stack at undercloud ~]$ nova list
> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
> | ID                                   | Name                    | Status | Task State | Power State | Networks            |
> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
> | 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
> | 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
> | 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
> | 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
> [stack at undercloud ~]$ ssh heat-admin at 192.0.2.10
> Last login: Fri Jun 3 10:01:44 2016 from gateway
> [heat-admin at overcloud-controller-0 ~]$ sudo su -
> Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0
> [root at overcloud-controller-0 ~]# . keystonerc_admin
>
> [root at overcloud-controller-0 ~]# pcs status
> Cluster name: tripleo_cluster
> Last updated: Fri Jun 3 10:07:22 2016          Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0
> Stack: corosync
> Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
> 3 nodes and 123 resources configured
>
> Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>
> Full list of resources:
>
> ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
> Clone Set: haproxy-clone [haproxy]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
> Master/Slave Set: galera-master [galera]
>     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: memcached-clone [memcached]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: rabbitmq-clone [rabbitmq]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-core-clone [openstack-core]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Master/Slave Set: redis-master [redis]
>     Masters: [ overcloud-controller-1 ]
>     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
> Clone Set: mongod-clone [mongod]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2
> Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-heat-api-clone [openstack-heat-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-glance-api-clone [openstack-glance-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-api-clone [openstack-nova-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: delay-clone [delay]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: neutron-server-clone [neutron-server]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: httpd-clone [httpd]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
> Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>
> Failed Actions:
> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms
> * openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms
> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms
> * openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms
> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=77, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms
> * openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none',
>     last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms
>
> PCSD Status:
>   overcloud-controller-0: Online
>   overcloud-controller-1: Online
>   overcloud-controller-2: Online
>
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled
>
> ________________________________
> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
> Sent: Monday, May 30, 2016 4:56 AM
> To: John Trowbridge; Lars Kellogg-Stedman
> Cc: rdo-list
> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
>
> Done one more time :-
>
> [stack at undercloud ~]$ heat deployment-show 9cc8087a-6d82-4261-8a13-ee8c46e3a02d
>
> Uploaded here :-
>
> http://textuploader.com/5bm5v
> ________________________________
> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
> Sent: Sunday, May 29, 2016 3:39 AM
> To: John Trowbridge; Lars Kellogg-Stedman
> Cc: rdo-list
> Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
>
> Error every time is the same :-
>
> 2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:18 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt-ControllerServicesBaseDeployment_Step2-ufz2ccs5egd7]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:20 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:20 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> 2016-05-29 07:20:21 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:20:22 [0]: SIGNAL_COMPLETE Unknown
> 2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
> 2016-05-29 07:24:22 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
> Stack overcloud CREATE_FAILED
> Deployment failed: Heat Stack create failed.
> + heat stack-list
> + grep -q CREATE_FAILED
> + deploy_status=1
> ++ heat resource-list --nested-depth 5 overcloud
> ++ grep FAILED
> ++ grep 'StructuredDeployment '
> ++ cut -d '|' -f3
> + for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
> + heat deployment-show 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3
> + exit 1
>
> Minimal configuration deployments run with no errors and build a completely functional environment.
>
> However, this template :-
>
> #################################
> # Test Controller + 2*Compute nodes
> #################################
> control_memory: 6144
> compute_memory: 6144
>
> undercloud_memory: 8192
>
> # Giving the undercloud additional CPUs can greatly improve heat's
> # performance (and result in a shorter deploy time).
> undercloud_vcpu: 4
>
> # We set introspection to true and use only the minimal amount of nodes
> # for this job, but test all defaults otherwise.
> step_introspect: true
>
> # Define a single controller node and a single compute node.
> overcloud_nodes:
>   - name: control_0
>     flavor: control
>
>   - name: compute_0
>     flavor: compute
>
>   - name: compute_1
>     flavor: compute
>
> # Tell tripleo how we want things done.
> extra_args: >-
>   --neutron-network-type vxlan
>   --neutron-tunnel-types vxlan
>   --ntp-server pool.ntp.org
>
> network_isolation: true
>
> picks up the new memory settings but doesn't create the second Compute node.
>
> Every time it is just Controller && (1)* Compute.
>
> HW - i7 4790, 32 GB RAM
>
> Thanks.
>
> Boris
>
> ________________________________
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From rlandy at redhat.com Fri Jun 3 15:43:02 2016
From: rlandy at redhat.com (Ronelle Landy)
Date: Fri, 3 Jun 2016 11:43:02 -0400 (EDT)
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <1464964179.9673.30.camel@ocf.co.uk>
References: <1464943002.9673.14.camel@ocf.co.uk> <1464948647.9673.19.camel@ocf.co.uk> <1464964179.9673.30.camel@ocf.co.uk>
Message-ID: <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>

Hello,

We have had success deploying RDO (Mitaka) on baremetal systems - using Tripleo Quickstart with both single-nic-vlans and bond-with-vlans network isolation configurations.

Baremetal can have some complicated networking issues but, from previous experience, if a single-controller deployment worked but an HA deployment did not, I would check:
- does the HA deployment command include: -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
- are there possible MTU issues?
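[Editor's note] Ronelle's two checks can be made concrete. A hedged sketch only — the environment files shown are the stock Mitaka-era template paths and the scale counts mirror this thread's layout, so adjust both to your setup; 1472 assumes a standard 1500-byte MTU minus 28 bytes of ICMP/IP header overhead:

```shell
# Check 1: HA needs the pacemaker environment file; without it the
# controllers come up unclustered and the overcloud deploy fails later.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  --control-scale 3 --compute-scale 2 \
  --ntp-server pool.ntp.org

# Check 2: MTU probe from the undercloud toward a controller.
# -M do forbids fragmentation, so a silent drop with a 1472-byte
# payload (1472 + 28 = 1500) points at a path-MTU mismatch on the
# provisioning network.
ping -c 3 -M do -s 1472 192.0.2.10
```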
----- Original Message ----- > From: "Christopher Brown" > To: pgsousa at gmail.com, ibravo at ltgfederal.com > Cc: rdo-list at redhat.com > Sent: Friday, June 3, 2016 10:29:39 AM > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > Hello Ignacio, > > Thanks for your response and good to know it isn't just me! > > I would be more than happy to provide developers with access to our > bare metal environments. I'll also file some bugzilla reports to see if > this generates any interest. > > Please do let me know if you make any progress - I am trying to deploy > HA with network isolation, multiple nics and vlans. > > The RDO web page states: > > "If you want to create a production-ready cloud, you'll want to use the > TripleO quickstart guide." > > which is a contradiction in terms really. > > Cheers > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > Pedro / Christopher, > > > > Just wanted to share with you that I also had plenty of issues > > deploying on bare metal HA servers, and have paused the deployment > > using TripleO until better winds start to flow here. I was able to > > deploy the QuickStart, but on bare metal the history was different. > > Couldn't even deploy a two server configuration. > > > > I was thinking that it would be good to have the developers have > > access to one of our environments and go through a full install with > > us to better see where things fail. We can do this handholding > > deployment once every week/month based on developers time > > availability. That way we can get a working install, and we can > > troubleshoot real life environment problems. > > > > > > IB > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > > > > Yes. I've used this, but I'll try again as there's seems to be new > > > updates. > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than epel- > > > release which is still required. 
> > > Enable latest RDO Stable Delorean repository for all packages > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.r > > > doproject.org/centos7-liberty/current/delorean.repo > > > Enable the Delorean Deps repository > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://tru > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown > > uk> wrote: > > > > No, Liberty deployed ok for us. > > > > > > > > It suggests to me a package mismatch. Have you completely rebuilt > > > > the > > > > undercloud and then the images using Liberty? > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > AttributeError: 'module' object has no attribute 'PortOpt' > > > > -- > > > > Regards, > > > > > > > > Christopher Brown > > > > OpenStack Engineer > > > > OCF plc > > > > > > > > Tel: +44 (0)114 257 2200 > > > > Web: www.ocf.co.uk > > > > Blog: blog.ocf.co.uk > > > > Twitter: @ocfplc > > > > > > > > Please note, any emails relating to an OCF Support request must > > > > always > > > > be sent to support at ocf.co.uk for a ticket number to be generated > > > > or > > > > existing support ticket to be updated. Should this not be done > > > > then OCF > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > timely > > > > manner. > > > > > > > > OCF plc is a company registered in England and Wales. Registered > > > > number > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > > > > OCF plc, > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > Sheffield S35 > > > > 2PG. > > > > > > > > If you have received this message in error, please notify us > > > > immediately and remove it from your system. 
> > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- > Regards, > > Christopher Brown > OpenStack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > Please note, any emails relating to an OCF Support request must always > be sent to support at ocf.co.uk for a ticket number to be generated or > existing support ticket to be updated. Should this not be done then OCF > > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. Registered number > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > If you have received this message in error, please notify us > immediately and remove it from your system. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From pgsousa at gmail.com Fri Jun 3 15:48:38 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Fri, 3 Jun 2016 16:48:38 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> References: <1464943002.9673.14.camel@ocf.co.uk> <1464948647.9673.19.camel@ocf.co.uk> <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> Message-ID: Hi Ronelle, maybe I understand it wrong but I thought that Tripleo Quickstart was for deploying virtual environments? And for baremetal we should use http://docs.openstack.org/developer/tripleo-docs/installation/installation.html ? 
Thanks On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote: > Hello, > > We have had success deploying RDO (Mitaka) on baremetal systems - using > Tripleo Quickstart with both single-nic-vlans and bond-with-vlans network > isolation configurations. > > Baremetal can have some complicated networking issues but, from previous > experiences, if a single-controller deployment worked but a HA deployment > did not, I would check: > - does the HA deployment command include: -e > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > - are there possible MTU issues? > > > ----- Original Message ----- > > From: "Christopher Brown" > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > Cc: rdo-list at redhat.com > > Sent: Friday, June 3, 2016 10:29:39 AM > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > Hello Ignacio, > > > > Thanks for your response and good to know it isn't just me! > > > > I would be more than happy to provide developers with access to our > > bare metal environments. I'll also file some bugzilla reports to see if > > this generates any interest. > > > > Please do let me know if you make any progress - I am trying to deploy > > HA with network isolation, multiple nics and vlans. > > > > The RDO web page states: > > > > "If you want to create a production-ready cloud, you'll want to use the > > TripleO quickstart guide." > > > > which is a contradiction in terms really. > > > > Cheers > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > Pedro / Christopher, > > > > > > Just wanted to share with you that I also had plenty of issues > > > deploying on bare metal HA servers, and have paused the deployment > > > using TripleO until better winds start to flow here. I was able to > > > deploy the QuickStart, but on bare metal the history was different. > > > Couldn't even deploy a two server configuration. 
> > > > > > I was thinking that it would be good to have the developers have > > > access to one of our environments and go through a full install with > > > us to better see where things fail. We can do this handholding > > > deployment once every week/month based on developers time > > > availability. That way we can get a working install, and we can > > > troubleshoot real life environment problems. > > > > > > > > > IB > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > > > > > > Yes. I've used this, but I'll try again as there's seems to be new > > > > updates. > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than epel- > > > > release which is still required. > > > > Enable latest RDO Stable Delorean repository for all packages > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.r > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > Enable the Delorean Deps repository > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://tru > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown > > > uk> wrote: > > > > > No, Liberty deployed ok for us. > > > > > > > > > > It suggests to me a package mismatch. Have you completely rebuilt > > > > > the > > > > > undercloud and then the images using Liberty? 
> > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > AttributeError: 'module' object has no attribute 'PortOpt' > > > > > -- > > > > > Regards, > > > > > > > > > > Christopher Brown > > > > > OpenStack Engineer > > > > > OCF plc > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > Web: www.ocf.co.uk > > > > > Blog: blog.ocf.co.uk > > > > > Twitter: @ocfplc > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > > > > always > > > > > be sent to support at ocf.co.uk for a ticket number to be generated > > > > > or > > > > > existing support ticket to be updated. Should this not be done > > > > > then OCF > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > > timely > > > > > manner. > > > > > > > > > > OCF plc is a company registered in England and Wales. Registered > > > > > number > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > > > > > OCF plc, > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > > Sheffield S35 > > > > > 2PG. > > > > > > > > > > If you have received this message in error, please notify us > > > > > immediately and remove it from your system. > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > > Regards, > > > > Christopher Brown > > OpenStack Engineer > > OCF plc > > > > Tel: +44 (0)114 257 2200 > > Web: www.ocf.co.uk > > Blog: blog.ocf.co.uk > > Twitter: @ocfplc > > > > Please note, any emails relating to an OCF Support request must always > > be sent to support at ocf.co.uk for a ticket number to be generated or > > existing support ticket to be updated. Should this not be done then OCF > > > > cannot be held responsible for requests not dealt with in a timely > > manner. 
> > > > OCF plc is a company registered in England and Wales. Registered number > > > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > > 2PG. > > > > If you have received this message in error, please notify us > > immediately and remove it from your system. > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Fri Jun 3 15:56:35 2016 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 3 Jun 2016 11:56:35 -0400 (EDT) Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464948647.9673.19.camel@ocf.co.uk> <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> Message-ID: <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> Hi Pedro, You could use the docs you referred to. Alternatively, if you want to use a vm for the undercloud and baremetal machines for the overcloud, it is possible to use Tripleo Qucikstart with a few modifications. https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. ----- Original Message ----- > From: "Pedro Sousa" > To: "Ronelle Landy" > Cc: "Christopher Brown" , "Ignacio Bravo" , "rdo-list" > > Sent: Friday, June 3, 2016 11:48:38 AM > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > Hi Ronelle, > > maybe I understand it wrong but I thought that Tripleo Quickstart was for > deploying virtual environments? > > And for baremetal we should use > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > ? 
> > Thanks > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote: > > > Hello, > > > > We have had success deploying RDO (Mitaka) on baremetal systems - using > > Tripleo Quickstart with both single-nic-vlans and bond-with-vlans network > > isolation configurations. > > > > Baremetal can have some complicated networking issues but, from previous > > experiences, if a single-controller deployment worked but a HA deployment > > did not, I would check: > > - does the HA deployment command include: -e > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > - are there possible MTU issues? > > > > > > ----- Original Message ----- > > > From: "Christopher Brown" > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > > Cc: rdo-list at redhat.com > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > Hello Ignacio, > > > > > > Thanks for your response and good to know it isn't just me! > > > > > > I would be more than happy to provide developers with access to our > > > bare metal environments. I'll also file some bugzilla reports to see if > > > this generates any interest. > > > > > > Please do let me know if you make any progress - I am trying to deploy > > > HA with network isolation, multiple nics and vlans. > > > > > > The RDO web page states: > > > > > > "If you want to create a production-ready cloud, you'll want to use the > > > TripleO quickstart guide." > > > > > > which is a contradiction in terms really. > > > > > > Cheers > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > > Pedro / Christopher, > > > > > > > > Just wanted to share with you that I also had plenty of issues > > > > deploying on bare metal HA servers, and have paused the deployment > > > > using TripleO until better winds start to flow here. I was able to > > > > deploy the QuickStart, but on bare metal the history was different. 
> > > > Couldn't even deploy a two server configuration. > > > > > > > > I was thinking that it would be good to have the developers have > > > > access to one of our environments and go through a full install with > > > > us to better see where things fail. We can do this handholding > > > > deployment once every week/month based on developers time > > > > availability. That way we can get a working install, and we can > > > > troubleshoot real life environment problems. > > > > > > > > > > > > IB > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > > > > > > > > Yes. I've used this, but I'll try again as there's seems to be new > > > > > updates. > > > > > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than epel- > > > > > release which is still required. > > > > > Enable latest RDO Stable Delorean repository for all packages > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.r > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > Enable the Delorean Deps repository > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://tru > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown > > > > uk> wrote: > > > > > > No, Liberty deployed ok for us. > > > > > > > > > > > > It suggests to me a package mismatch. Have you completely rebuilt > > > > > > the > > > > > > undercloud and then the images using Liberty? 
> > > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > > AttributeError: 'module' object has no attribute 'PortOpt' > > > > > > -- > > > > > > Regards, > > > > > > > > > > > > Christopher Brown > > > > > > OpenStack Engineer > > > > > > OCF plc > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > Web: www.ocf.co.uk > > > > > > Blog: blog.ocf.co.uk > > > > > > Twitter: @ocfplc > > > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > > > > > always > > > > > > be sent to support at ocf.co.uk for a ticket number to be generated > > > > > > or > > > > > > existing support ticket to be updated. Should this not be done > > > > > > then OCF > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > > > timely > > > > > > manner. > > > > > > > > > > > > OCF plc is a company registered in England and Wales. Registered > > > > > > number > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > > > > > > OCF plc, > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > > > Sheffield S35 > > > > > > 2PG. > > > > > > > > > > > > If you have received this message in error, please notify us > > > > > > immediately and remove it from your system. > > > > > > > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- > > > Regards, > > > > > > Christopher Brown > > > OpenStack Engineer > > > OCF plc > > > > > > Tel: +44 (0)114 257 2200 > > > Web: www.ocf.co.uk > > > Blog: blog.ocf.co.uk > > > Twitter: @ocfplc > > > > > > Please note, any emails relating to an OCF Support request must always > > > be sent to support at ocf.co.uk for a ticket number to be generated or > > > existing support ticket to be updated. 
Should this not be done then OCF > > > > > > cannot be held responsible for requests not dealt with in a timely > > > manner. > > > > > > OCF plc is a company registered in England and Wales. Registered number > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > > > 2PG. > > > > > > If you have received this message in error, please notify us > > > immediately and remove it from your system. > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > From pgsousa at gmail.com Fri Jun 3 16:26:58 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Fri, 3 Jun 2016 17:26:58 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> References: <1464948647.9673.19.camel@ocf.co.uk> <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> Message-ID: Thanks Ronelle, do you think this kind of errors can be related with network settings? "Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass" On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote: > Hi Pedro, > > You could use the docs you referred to. > Alternatively, if you want to use a vm for the undercloud and baremetal > machines for the overcloud, it is possible to use Tripleo Qucikstart with a > few modifications. > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. 
> > ----- Original Message ----- > > From: "Pedro Sousa" > > To: "Ronelle Landy" > > Cc: "Christopher Brown" , "Ignacio Bravo" < > ibravo at ltgfederal.com>, "rdo-list" > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > Hi Ronelle, > > > > maybe I understand it wrong but I thought that Tripleo Quickstart was for > > deploying virtual environments? > > > > And for baremetal we should use > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > ? > > > > Thanks > > > > > > > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote: > > > > > Hello, > > > > > > We have had success deploying RDO (Mitaka) on baremetal systems - using > > > Tripleo Quickstart with both single-nic-vlans and bond-with-vlans > network > > > isolation configurations. > > > > > > Baremetal can have some complicated networking issues but, from > previous > > > experiences, if a single-controller deployment worked but a HA > deployment > > > did not, I would check: > > > - does the HA deployment command include: -e > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > - are there possible MTU issues? > > > > > > > > > ----- Original Message ----- > > > > From: "Christopher Brown" > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > > > Cc: rdo-list at redhat.com > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > > > Hello Ignacio, > > > > > > > > Thanks for your response and good to know it isn't just me! > > > > > > > > I would be more than happy to provide developers with access to our > > > > bare metal environments. I'll also file some bugzilla reports to see > if > > > > this generates any interest. > > > > > > > > Please do let me know if you make any progress - I am trying to > deploy > > > > HA with network isolation, multiple nics and vlans. 
> > > > > > > > The RDO web page states: > > > > > > > > "If you want to create a production-ready cloud, you'll want to use > the > > > > TripleO quickstart guide." > > > > > > > > which is a contradiction in terms really. > > > > > > > > Cheers > > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > > > Pedro / Christopher, > > > > > > > > > > Just wanted to share with you that I also had plenty of issues > > > > > deploying on bare metal HA servers, and have paused the deployment > > > > > using TripleO until better winds start to flow here. I was able to > > > > > deploy the QuickStart, but on bare metal the history was different. > > > > > Couldn't even deploy a two server configuration. > > > > > > > > > > I was thinking that it would be good to have the developers have > > > > > access to one of our environments and go through a full install > with > > > > > us to better see where things fail. We can do this handholding > > > > > deployment once every week/month based on developers time > > > > > availability. That way we can get a working install, and we can > > > > > troubleshoot real life environment problems. > > > > > > > > > > > > > > > IB > > > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > > > > > > > > > > Yes. I've used this, but I'll try again as there's seems to be > new > > > > > > updates. > > > > > > > > > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than epel- > > > > > > release which is still required. 
> > > > > > Enable latest RDO Stable Delorean repository for all packages > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > https://trunk.r > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > > Enable the Delorean Deps repository > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > http://tru > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < > cbrown2 at ocf.co. > > > > > > uk> wrote: > > > > > > > No, Liberty deployed ok for us. > > > > > > > > > > > > > > It suggests to me a package mismatch. Have you completely > rebuilt > > > > > > > the > > > > > > > undercloud and then the images using Liberty? > > > > > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > > > AttributeError: 'module' object has no attribute 'PortOpt' > > > > > > > -- > > > > > > > Regards, > > > > > > > > > > > > > > Christopher Brown > > > > > > > OpenStack Engineer > > > > > > > OCF plc > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > Web: www.ocf.co.uk > > > > > > > Blog: blog.ocf.co.uk > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > > > > > > always > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > generated > > > > > > > or > > > > > > > existing support ticket to be updated. Should this not be done > > > > > > > then OCF > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > > > > timely > > > > > > > manner. > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. > Registered > > > > > > > number > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. 
Registered office address: > > > > > > > OCF plc, > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > > > > Sheffield S35 > > > > > > > 2PG. > > > > > > > > > > > > > > If you have received this message in error, please notify us > > > > > > > immediately and remove it from your system. > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > rdo-list mailing list > > > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > > > > Regards, > > > > > > > > Christopher Brown > > > > OpenStack Engineer > > > > OCF plc > > > > > > > > Tel: +44 (0)114 257 2200 > > > > Web: www.ocf.co.uk > > > > Blog: blog.ocf.co.uk > > > > Twitter: @ocfplc > > > > > > > > Please note, any emails relating to an OCF Support request must > always > > > > be sent to support at ocf.co.uk for a ticket number to be generated or > > > > existing support ticket to be updated. Should this not be done then > OCF > > > > > > > > cannot be held responsible for requests not dealt with in a timely > > > > manner. > > > > > > > > OCF plc is a company registered in England and Wales. Registered > number > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF > plc, > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield > S35 > > > > 2PG. > > > > > > > > If you have received this message in error, please notify us > > > > immediately and remove it from your system. > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rbowen at redhat.com Fri Jun 3 16:38:24 2016 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 3 Jun 2016 09:38:24 -0700 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <1464943002.9673.14.camel@ocf.co.uk> References: <1464943002.9673.14.camel@ocf.co.uk> Message-ID: On Jun 3, 2016 4:36 AM, "Christopher Brown" wrote: > > > > Its all a bit of a mess. I did try to make a start on cleaning up the > RDO docs but to be honest it meant having to learn yet another type of > documentation syntax so have reverted to internal documentation. > > I would be delighted to work with you on the docs syntax -- if you want to just write content, in any format, I'll be glad to do the markup side of things. Any help we can get on the docs is very, very welcome. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Fri Jun 3 16:43:10 2016 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 3 Jun 2016 09:43:10 -0700 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464943002.9673.14.camel@ocf.co.uk> Message-ID: <38ce0ed1-bf5a-428b-614e-3d61c8c7ff30@redhat.com> On 06/03/2016 09:38 AM, Rich Bowen wrote: > > > On Jun 3, 2016 4:36 AM, "Christopher Brown" > wrote: > > > > > > > > Its all a bit of a mess. I did try to make a start on cleaning up the > > RDO docs but to be honest it meant having to learn yet another type of > > documentation syntax so have reverted to internal documentation. > > > > > > I would be delighted to work with you on the docs syntax -- if you > want to just write content, in any format, I'll be glad to do the > markup side of things. Any help we can get on the docs is very, very > welcome. > Also, as OSP gets closer and closer to RDO in coming releases, I would really love to see closer working between the OSP and RDO documentation. This will benefit everyone. If there's anything at all I can do to help this out, please let me know. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Fri Jun 3 16:55:45 2016 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 3 Jun 2016 12:55:45 -0400 (EDT) Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> Message-ID: <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> I am not sure exactly where you installed from, and when you did your installation, but any chance you've hit: https://bugs.launchpad.net/tripleo/+bug/1584892? There is a linked bugzilla record. ----- Original Message ----- > From: "Pedro Sousa" > To: "Ronelle Landy" > Cc: "Christopher Brown" , "Ignacio Bravo" , "rdo-list" > > Sent: Friday, June 3, 2016 12:26:58 PM > Subject: Re: [rdo-list] Baremetal Tripleo stable version? 
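[Editor's note: Ronelle's version hint above (rabbitmq-server-3.6.2-3.el7 known to work) can be scripted as a quick comparison. A minimal sketch; the installed version is hardcoded here for illustration, since in practice it would come from `rpm -q --qf '%{VERSION}' rabbitmq-server`.]

```shell
# Compare an installed rabbitmq-server version against the one reported
# working in this thread. KNOWN_GOOD is from Ronelle's message;
# INSTALLED is a stand-in for the rpm query result.
KNOWN_GOOD="3.6.2"
INSTALLED="3.6.2"

# sort -V (GNU version sort) orders version strings numerically by
# component; if the lowest of the two is the known-good version, the
# installed one is at least as new.
LOWEST=$(printf '%s\n%s\n' "$KNOWN_GOOD" "$INSTALLED" | sort -V | head -n1)
if [ "$LOWEST" = "$KNOWN_GOOD" ]; then
  echo "rabbitmq version ok"
else
  echo "rabbitmq older than known good"
fi
```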
> > > > > > Hi Ronelle, > > > > > > maybe I understand it wrong but I thought that Tripleo Quickstart was for > > > deploying virtual environments? > > > > > > And for baremetal we should use > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > ? > > > > > > Thanks > > > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote: > > > > > > > Hello, > > > > > > > > We have had success deploying RDO (Mitaka) on baremetal systems - using > > > > Tripleo Quickstart with both single-nic-vlans and bond-with-vlans > > network > > > > isolation configurations. > > > > > > > > Baremetal can have some complicated networking issues but, from > > previous > > > > experiences, if a single-controller deployment worked but a HA > > deployment > > > > did not, I would check: > > > > - does the HA deployment command include: -e > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > > - are there possible MTU issues? > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "Christopher Brown" > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > > > > Cc: rdo-list at redhat.com > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > > > > > Hello Ignacio, > > > > > > > > > > Thanks for your response and good to know it isn't just me! > > > > > > > > > > I would be more than happy to provide developers with access to our > > > > > bare metal environments. I'll also file some bugzilla reports to see > > if > > > > > this generates any interest. > > > > > > > > > > Please do let me know if you make any progress - I am trying to > > deploy > > > > > HA with network isolation, multiple nics and vlans. > > > > > > > > > > The RDO web page states: > > > > > > > > > > "If you want to create a production-ready cloud, you'll want to use > > the > > > > > TripleO quickstart guide." 
> > > > > > > > > > which is a contradiction in terms really. > > > > > > > > > > Cheers > > > > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > > > > Pedro / Christopher, > > > > > > > > > > > > Just wanted to share with you that I also had plenty of issues > > > > > > deploying on bare metal HA servers, and have paused the deployment > > > > > > using TripleO until better winds start to flow here. I was able to > > > > > > deploy the QuickStart, but on bare metal the history was different. > > > > > > Couldn't even deploy a two server configuration. > > > > > > > > > > > > I was thinking that it would be good to have the developers have > > > > > > access to one of our environments and go through a full install > > with > > > > > > us to better see where things fail. We can do this handholding > > > > > > deployment once every week/month based on developers time > > > > > > availability. That way we can get a working install, and we can > > > > > > troubleshoot real life environment problems. > > > > > > > > > > > > > > > > > > IB > > > > > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote: > > > > > > > > > > > > > Yes. I've used this, but I'll try again as there's seems to be > > new > > > > > > > updates. > > > > > > > > > > > > > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than epel- > > > > > > > release which is still required. > > > > > > > Enable latest RDO Stable Delorean repository for all packages > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > > https://trunk.r > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > > > Enable the Delorean Deps repository > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > > http://tru > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < > > cbrown2 at ocf.co. 
> > > > > > > uk> wrote: > > > > > > > > No, Liberty deployed ok for us. > > > > > > > > > > > > > > > > It suggests to me a package mismatch. Have you completely > > rebuilt > > > > > > > > the > > > > > > > > undercloud and then the images using Liberty? > > > > > > > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > > > > AttributeError: 'module' object has no attribute 'PortOpt' > > > > > > > > -- > > > > > > > > Regards, > > > > > > > > > > > > > > > > Christopher Brown > > > > > > > > OpenStack Engineer > > > > > > > > OCF plc > > > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > > Web: www.ocf.co.uk > > > > > > > > Blog: blog.ocf.co.uk > > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > > > > > > > always > > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > > generated > > > > > > > > or > > > > > > > > existing support ticket to be updated. Should this not be done > > > > > > > > then OCF > > > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > > > > > timely > > > > > > > > manner. > > > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. > > Registered > > > > > > > > number > > > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > > > > > > > > OCF plc, > > > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > > > > > Sheffield S35 > > > > > > > > 2PG. > > > > > > > > > > > > > > > > If you have received this message in error, please notify us > > > > > > > > immediately and remove it from your system. 
> > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > rdo-list mailing list > > > > > > rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > > > > > Regards, > > > > > > > > > > Christopher Brown > > > > > OpenStack Engineer > > > > > OCF plc > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > Web: www.ocf.co.uk > > > > > Blog: blog.ocf.co.uk > > > > > Twitter: @ocfplc > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > always > > > > > be sent to support at ocf.co.uk for a ticket number to be generated or > > > > > existing support ticket to be updated. Should this not be done then > > OCF > > > > > > > > > > cannot be held responsible for requests not dealt with in a timely > > > > > manner. > > > > > > > > > > OCF plc is a company registered in England and Wales. Registered > > number > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF > > plc, > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield > > S35 > > > > > 2PG. > > > > > > > > > > If you have received this message in error, please notify us > > > > > immediately and remove it from your system. > > > > > > > > > > _______________________________________________ > > > > > rdo-list mailing list > > > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > From pgsousa at gmail.com Fri Jun 3 17:20:43 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Fri, 3 Jun 2016 18:20:43 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? 
In-Reply-To: <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> Message-ID: Anyway to workaround this? Maybe downgrade hiera? On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote: > I am not sure exactly where you installed from, and when you did your > installation, but any chance, you've hit: > https://bugs.launchpad.net/tripleo/+bug/1584892? > There is a link bugzilla record. > > ----- Original Message ----- > > From: "Pedro Sousa" > > To: "Ronelle Landy" > > Cc: "Christopher Brown" , "Ignacio Bravo" < > ibravo at ltgfederal.com>, "rdo-list" > > > > Sent: Friday, June 3, 2016 12:26:58 PM > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > Thanks Ronelle, > > > > do you think this kind of errors can be related with network settings? > > > > "Could not retrieve fact='rabbitmq_nodename', resolution='': > > undefined method `[]' for nil:NilClass Could not retrieve > > fact='rabbitmq_nodename', resolution='': undefined method `[]' > > for nil:NilClass" > > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote: > > > > > Hi Pedro, > > > > > > You could use the docs you referred to. > > > Alternatively, if you want to use a vm for the undercloud and baremetal > > > machines for the overcloud, it is possible to use Tripleo Qucikstart > with a > > > few modifications. > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. > > > > > > ----- Original Message ----- > > > > From: "Pedro Sousa" > > > > To: "Ronelle Landy" > > > > Cc: "Christopher Brown" , "Ignacio Bravo" < > > > ibravo at ltgfederal.com>, "rdo-list" > > > > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? 
> > > > > > > > Hi Ronelle, > > > > > > > > maybe I understand it wrong but I thought that Tripleo Quickstart > was for > > > > deploying virtual environments? > > > > > > > > And for baremetal we should use > > > > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > > ? > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > wrote: > > > > > > > > > Hello, > > > > > > > > > > We have had success deploying RDO (Mitaka) on baremetal systems - > using > > > > > Tripleo Quickstart with both single-nic-vlans and bond-with-vlans > > > network > > > > > isolation configurations. > > > > > > > > > > Baremetal can have some complicated networking issues but, from > > > previous > > > > > experiences, if a single-controller deployment worked but a HA > > > deployment > > > > > did not, I would check: > > > > > - does the HA deployment command include: -e > > > > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > > > - are there possible MTU issues? > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > From: "Christopher Brown" > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > > > > > Cc: rdo-list at redhat.com > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > > > > > > > Hello Ignacio, > > > > > > > > > > > > Thanks for your response and good to know it isn't just me! > > > > > > > > > > > > I would be more than happy to provide developers with access to > our > > > > > > bare metal environments. I'll also file some bugzilla reports to > see > > > if > > > > > > this generates any interest. > > > > > > > > > > > > Please do let me know if you make any progress - I am trying to > > > deploy > > > > > > HA with network isolation, multiple nics and vlans. 
> > > > > > > > > > > > The RDO web page states: > > > > > > > > > > > > "If you want to create a production-ready cloud, you'll want to > use > > > the > > > > > > TripleO quickstart guide." > > > > > > > > > > > > which is a contradiction in terms really. > > > > > > > > > > > > Cheers > > > > > > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > > > > > Pedro / Christopher, > > > > > > > > > > > > > > Just wanted to share with you that I also had plenty of issues > > > > > > > deploying on bare metal HA servers, and have paused the > deployment > > > > > > > using TripleO until better winds start to flow here. I was > able to > > > > > > > deploy the QuickStart, but on bare metal the history was > different. > > > > > > > Couldn't even deploy a two server configuration. > > > > > > > > > > > > > > I was thinking that it would be good to have the developers > have > > > > > > > access to one of our environments and go through a full install > > > with > > > > > > > us to better see where things fail. We can do this handholding > > > > > > > deployment once every week/month based on developers time > > > > > > > availability. That way we can get a working install, and we can > > > > > > > troubleshoot real life environment problems. > > > > > > > > > > > > > > > > > > > > > IB > > > > > > > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > wrote: > > > > > > > > > > > > > > > Yes. I've used this, but I'll try again as there's seems to > be > > > new > > > > > > > > updates. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than > epel- > > > > > > > > release which is still required. 
> > > > > > > > Enable latest RDO Stable Delorean repository for all packages > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > > > https://trunk.r > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > > > > Enable the Delorean Deps repository > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > > > http://tru > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < > > > cbrown2 at ocf.co. > > > > > > > > uk> wrote: > > > > > > > > > No, Liberty deployed ok for us. > > > > > > > > > > > > > > > > > > It suggests to me a package mismatch. Have you completely > > > rebuilt > > > > > > > > > the > > > > > > > > > undercloud and then the images using Liberty? > > > > > > > > > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > > > > > AttributeError: 'module' object has no attribute > 'PortOpt' > > > > > > > > > -- > > > > > > > > > Regards, > > > > > > > > > > > > > > > > > > Christopher Brown > > > > > > > > > OpenStack Engineer > > > > > > > > > OCF plc > > > > > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > > > Web: www.ocf.co.uk > > > > > > > > > Blog: blog.ocf.co.uk > > > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support request > must > > > > > > > > > always > > > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > > > generated > > > > > > > > > or > > > > > > > > > existing support ticket to be updated. Should this not be > done > > > > > > > > > then OCF > > > > > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > > > > > > timely > > > > > > > > > manner. > > > > > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. 
> > > Registered > > > > > > > > > number > > > > > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office > address: > > > > > > > > > OCF plc, > > > > > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > > > > > > Sheffield S35 > > > > > > > > > 2PG. > > > > > > > > > > > > > > > > > > If you have received this message in error, please notify > us > > > > > > > > > immediately and remove it from your system. > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > rdo-list mailing list > > > > > > > rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > -- > > > > > > Regards, > > > > > > > > > > > > Christopher Brown > > > > > > OpenStack Engineer > > > > > > OCF plc > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > Web: www.ocf.co.uk > > > > > > Blog: blog.ocf.co.uk > > > > > > Twitter: @ocfplc > > > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > > always > > > > > > be sent to support at ocf.co.uk for a ticket number to be > generated or > > > > > > existing support ticket to be updated. Should this not be done > then > > > OCF > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > timely > > > > > > manner. > > > > > > > > > > > > OCF plc is a company registered in England and Wales. Registered > > > number > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > OCF > > > plc, > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > Sheffield > > > S35 > > > > > > 2PG. > > > > > > > > > > > > If you have received this message in error, please notify us > > > > > > immediately and remove it from your system. 
> > > > > > _______________________________________________ > > > > > > rdo-list mailing list > > > > > > rdo-list at redhat.com > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Fri Jun 3 19:26:11 2016 From: rlandy at redhat.com (Ronelle Landy) Date: Fri, 3 Jun 2016 15:26:11 -0400 (EDT) Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> Message-ID: <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> The report says: "Fix Released" as of 2016-05-24. Are you installing on a clean system with the latest repositories? Might also want to check your version of rabbitmq: I have rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. ----- Original Message ----- > From: "Pedro Sousa" > To: "Ronelle Landy" > Cc: "Christopher Brown" , "Ignacio Bravo" , "rdo-list" > > Sent: Friday, June 3, 2016 1:20:43 PM > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > Any way to work around this? Maybe downgrade hiera? > > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote: > > > I am not sure exactly where you installed from, and when you did your > > installation, but any chance you've hit: > > https://bugs.launchpad.net/tripleo/+bug/1584892? > > There is a linked Bugzilla record. > > > > ----- Original Message ----- > > > From: "Pedro Sousa" > > > To: "Ronelle Landy" > > > Cc: "Christopher Brown" , "Ignacio Bravo" < > > ibravo at ltgfederal.com>, "rdo-list" > > > > > > Sent: Friday, June 3, 2016 12:26:58 PM > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> > > Thanks Ronelle, > > > > > > do you think this kind of errors can be related with network settings? > > > > > > "Could not retrieve fact='rabbitmq_nodename', resolution='': > > > undefined method `[]' for nil:NilClass Could not retrieve > > > fact='rabbitmq_nodename', resolution='': undefined method `[]' > > > for nil:NilClass" > > > > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote: > > > > > > > Hi Pedro, > > > > > > > > You could use the docs you referred to. > > > > Alternatively, if you want to use a vm for the undercloud and baremetal > > > > machines for the overcloud, it is possible to use Tripleo Quickstart > > with a > > > > few modifications. > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. > > > > > > > > ----- Original Message ----- > > > > > From: "Pedro Sousa" > > > > > To: "Ronelle Landy" > > > > > Cc: "Christopher Brown" , "Ignacio Bravo" < > > > > ibravo at ltgfederal.com>, "rdo-list" > > > > > > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > > > > > Hi Ronelle, > > > > > > > > > > maybe I understand it wrong but I thought that Tripleo Quickstart > > was for > > > > > deploying virtual environments? > > > > > > > > > > And for baremetal we should use > > > > > > > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > > > ? > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > > wrote: > > > > > > > > > > > Hello, > > > > > > > > > > > > We have had success deploying RDO (Mitaka) on baremetal systems - > > using > > > > > > Tripleo Quickstart with both single-nic-vlans and bond-with-vlans > > > > network > > > > > > isolation configurations.
> > > > > > > > > > > > Baremetal can have some complicated networking issues but, from > > > > previous > > > > > > experiences, if a single-controller deployment worked but a HA > > > > deployment > > > > > > did not, I would check: > > > > > > - does the HA deployment command include: -e > > > > > > > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > > > > - are there possible MTU issues? > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > From: "Christopher Brown" > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > > > > > > Cc: rdo-list at redhat.com > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > > > > > > > > > Hello Ignacio, > > > > > > > > > > > > > > Thanks for your response and good to know it isn't just me! > > > > > > > > > > > > > > I would be more than happy to provide developers with access to > > our > > > > > > > bare metal environments. I'll also file some bugzilla reports to > > see > > > > if > > > > > > > this generates any interest. > > > > > > > > > > > > > > Please do let me know if you make any progress - I am trying to > > > > deploy > > > > > > > HA with network isolation, multiple nics and vlans. > > > > > > > > > > > > > > The RDO web page states: > > > > > > > > > > > > > > "If you want to create a production-ready cloud, you'll want to > > use > > > > the > > > > > > > TripleO quickstart guide." > > > > > > > > > > > > > > which is a contradiction in terms really. 
> > > > > > > > > > > > > > Cheers > > > > > > > > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > > > > > > Pedro / Christopher, > > > > > > > > > > > > > > > > Just wanted to share with you that I also had plenty of issues > > > > > > > > deploying on bare metal HA servers, and have paused the > > deployment > > > > > > > > using TripleO until better winds start to flow here. I was > > able to > > > > > > > > deploy the QuickStart, but on bare metal the history was > > different. > > > > > > > > Couldn't even deploy a two server configuration. > > > > > > > > > > > > > > > > I was thinking that it would be good to have the developers > > have > > > > > > > > access to one of our environments and go through a full install > > > > with > > > > > > > > us to better see where things fail. We can do this handholding > > > > > > > > deployment once every week/month based on developers time > > > > > > > > availability. That way we can get a working install, and we can > > > > > > > > troubleshoot real life environment problems. > > > > > > > > > > > > > > > > > > > > > > > > IB > > > > > > > > > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > > wrote: > > > > > > > > > > > > > > > > > Yes. I've used this, but I'll try again as there's seems to > > be > > > > new > > > > > > > > > updates. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than > > epel- > > > > > > > > > release which is still required. 
> > > > > > > > > Enable latest RDO Stable Delorean repository for all packages > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > > > > https://trunk.r > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > > > > > Enable the Delorean Deps repository > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > > > > http://tru > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < > > > > cbrown2 at ocf.co. > > > > > > > > > uk> wrote: > > > > > > > > > > No, Liberty deployed ok for us. > > > > > > > > > > > > > > > > > > > > It suggests to me a package mismatch. Have you completely > > > > rebuilt > > > > > > > > > > the > > > > > > > > > > undercloud and then the images using Liberty? > > > > > > > > > > > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > > > > > > AttributeError: 'module' object has no attribute > > 'PortOpt' > > > > > > > > > > -- > > > > > > > > > > Regards, > > > > > > > > > > > > > > > > > > > > Christopher Brown > > > > > > > > > > OpenStack Engineer > > > > > > > > > > OCF plc > > > > > > > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > > > > Web: www.ocf.co.uk > > > > > > > > > > Blog: blog.ocf.co.uk > > > > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support request > > must > > > > > > > > > > always > > > > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > > > > generated > > > > > > > > > > or > > > > > > > > > > existing support ticket to be updated. Should this not be > > done > > > > > > > > > > then OCF > > > > > > > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > > > > > > > > timely > > > > > > > > > > manner. 
> > > > > > > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. > > > > Registered > > > > > > > > > > number > > > > > > > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office > > address: > > > > > > > > > > OCF plc, > > > > > > > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > > > > > > > > Sheffield S35 > > > > > > > > > > 2PG. > > > > > > > > > > > > > > > > > > > > If you have received this message in error, please notify > > us > > > > > > > > > > immediately and remove it from your system. > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > rdo-list mailing list > > > > > > > > rdo-list at redhat.com > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > -- > > > > > > > Regards, > > > > > > > > > > > > > > Christopher Brown > > > > > > > OpenStack Engineer > > > > > > > OCF plc > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > Web: www.ocf.co.uk > > > > > > > Blog: blog.ocf.co.uk > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support request must > > > > always > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > > generated or > > > > > > > existing support ticket to be updated. Should this not be done > > then > > > > OCF > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > timely > > > > > > > manner. > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. Registered > > > > number > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office address: > > OCF > > > > plc, > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > Sheffield > > > > S35 > > > > > > > 2PG. 
> > > > > > > > > > > > > > If you have received this message in error, please notify us > > > > > > > immediately and remove it from your system. > > > > > > > > > > > > > > _______________________________________________ > > > > > > > rdo-list mailing list > > > > > > > rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > From trown at redhat.com Fri Jun 3 20:53:08 2016 From: trown at redhat.com (John Trowbridge) Date: Fri, 3 Jun 2016 16:53:08 -0400 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: References: <57517B59.7040103@redhat.com> Message-ID: <5751EE34.20901@redhat.com> I just did an HA deploy locally on master, and I see the same thing wrt telemetry services being down due to failed redis import. That could be a packaging bug (something should depend on python-redis, maybe python-tooz?). That said, it does not appear fatal in my case. Is there some issue other than telemetry services being down that you are seeing? That is certainly something we should fix, but I wouldn't characterize it as the deployment is constantly crashing. On 06/03/2016 11:30 AM, Boris Derzhavets wrote: > 1. Attempting to address your concern ( if I understood you correct ) > > First log :- > > [root at overcloud-controller-0 ceilometer]# cat central.log | grep ERROR > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service [req-4db5f172-0bf0-4200-9cf4-174859cdc00b admin - - - -] Error starting thread. 
> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service Traceback (most recent call last): > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service service.start() > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/agent/manager.py", line 384, in start > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self.partition_coordinator.start() > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 84, in start > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service backend_url, self._my_id) > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements=verify_requirements, > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements) > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, > 
2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service import redis > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service ImportError: No module named redis > 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service > > Second log :- > > [root at overcloud-controller-0 ceilometer]# cd - > /var/log/aodh > [root at overcloud-controller-0 aodh]# cat evaluator.log | grep ERROR > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service [-] Error starting thread.
> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service Traceback (most recent call last): > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service service.start() > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/aodh/evaluator/__init__.py", line 229, in start > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self.partition_coordinator.start() > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/aodh/coordination.py", line 133, in start > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self.backend_url, self._my_id) > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements=verify_requirements, > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements) > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements, > 
2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements, > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service import redis > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service ImportError: No module named redis > 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service > > 2 . Memory DIMMs DDR3 ( Kingston HyperX 1600 MHZ ) is not a problem > My board ASUS Z97-P cannot support more 32 GB. So .... > > 3. i7 4790 surprised me on doing deployment on TripleO Quickstart , in particular, Controller+2xComputes ( --compute-scale 2 ) > > Thank you > Boris. > ________________________________________ > From: John Trowbridge > Sent: Friday, June 3, 2016 8:43 AM > To: Boris Derzhavets; John Trowbridge; Lars Kellogg-Stedman > Cc: rdo-list > Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash > > So this last one looks like telemetry services went down. You could > check the logs on the controllers to see if it was OOM killed. My bet > would be this is what is happening. > > The reason that HA is not the default for tripleo-quickstart is exactly > this type of issue. 
It is pretty difficult to fit a full HA deployment > of TripleO on a 32G virthost. I think there is near 100% chance that the > default HA config will crash when trying to do anything on the > deployed overcloud, due to running out of memory. > > I have had some success in my local test setup using KSM [1] on the > virthost, and then changing the HA config to give the controllers more > memory. This results in overcommitting, but KSM can handle overcommitting > without going into swap. It might even be possible to try to set up KSM > in the environment setup part of quickstart. I would certainly accept an > RFE/patch for this [2,3]. > > If you have a larger virthost than 32G, you could similarly bump the > memory for the controllers, which should lead to a much higher success rate. > > There is also a feature coming in TripleO [4] that will allow choosing > what services get deployed in each role, which will allow us to tweak > the tripleo-quickstart HA config to deploy a minimal service layout in > order to reduce memory requirements. > > Thanks a ton for giving tripleo-quickstart a go!
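[Editor's note: the KSM setup described above can be sketched as below. This is illustrative only, not from the thread: the /sys/kernel/mm/ksm paths are the standard Linux KSM sysfs interface, but the tuning value shown in the comment is a placeholder to adjust for your virthost.]

```shell
#!/bin/sh
# Sketch: inspect Kernel Same-page Merging (KSM) state on the virthost,
# and show how it would be enabled. Reading is safe; enabling needs root.
KSM=/sys/kernel/mm/ksm

if [ -r "$KSM/run" ]; then
    printf 'KSM enabled:  %s\n' "$(cat "$KSM/run")"
    printf 'Pages shared: %s\n' "$(cat "$KSM/pages_shared")"
    # To enable (as root):
    #   echo 1 > /sys/kernel/mm/ksm/run
    #   echo 200 > /sys/kernel/mm/ksm/sleep_millisecs  # scan interval; tune to taste
else
    echo 'KSM interface not present on this kernel'
fi
```

QEMU marks guest RAM as mergeable by default (the machine-level mem-merge option), so once KSM is running, identical pages across the three controller VMs should start collapsing; watching pages_shared grow is a quick way to confirm it is working.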
> > [1] https://en.wikipedia.org/wiki/Kernel_same-page_merging > [2] https://bugs.launchpad.net/tripleo-quickstart > [3] https://review.openstack.org/#/q/project:openstack/tripleo-quickstart > [4] > https://blueprints.launchpad.net/tripleo/+spec/composable-services-within-roles > > On 06/03/2016 06:20 AM, Boris Derzhavets wrote: >> ===================================== >> >> Fresh HA deployment attempt >> >> ===================================== >> >> [stack at undercloud ~]$ date >> Fri Jun 3 10:05:35 UTC 2016 >> [stack at undercloud ~]$ heat stack-list >> +--------------------------------------+------------+-----------------+---------------------+--------------+ >> | id | stack_name | stack_status | creation_time | updated_time | >> +--------------------------------------+------------+-----------------+---------------------+--------------+ >> | 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud | CREATE_COMPLETE | 2016-06-03T08:14:19 | None | >> +--------------------------------------+------------+-----------------+---------------------+--------------+ >> [stack at undercloud ~]$ nova list >> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >> | ID | Name | Status | Task State | Power State | Networks | >> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >> | 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 | >> | 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 | >> | 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.11 | >> | 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 | >> 
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >> [stack at undercloud ~]$ ssh heat-admin at 192.0.2.10 >> Last login: Fri Jun 3 10:01:44 2016 from gateway >> [heat-admin at overcloud-controller-0 ~]$ sudo su - >> Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0 >> [root at overcloud-controller-0 ~]# . keystonerc_admin >> >> [root at overcloud-controller-0 ~]# pcs status >> Cluster name: tripleo_cluster >> Last updated: Fri Jun 3 10:07:22 2016 Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0 >> Stack: corosync >> Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum >> 3 nodes and 123 resources configured >> >> Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> >> Full list of resources: >> >> ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 >> Clone Set: haproxy-clone [haproxy] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 >> Master/Slave Set: galera-master [galera] >> Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: memcached-clone [memcached] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: rabbitmq-clone [rabbitmq] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-core-clone [openstack-core] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Master/Slave Set: redis-master [redis] >> Masters: [ overcloud-controller-1 ] >> Slaves: [ overcloud-controller-0 overcloud-controller-2 ] >> Clone Set: mongod-clone [mongod] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-aodh-evaluator-clone 
[openstack-aodh-evaluator] >> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-l3-agent-clone [neutron-l3-agent] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2 >> Clone Set: openstack-heat-engine-clone [openstack-heat-engine] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api] >> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener] >> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier] >> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-heat-api-clone [openstack-heat-api] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector] >> Stopped: [ overcloud-controller-0 
overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-glance-api-clone [openstack-glance-api] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-nova-api-clone [openstack-nova-api] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-sahara-api-clone [openstack-sahara-api] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-glance-registry-clone [openstack-glance-registry] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-cinder-api-clone [openstack-cinder-api] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] >> 
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: delay-clone [delay] >> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: neutron-server-clone [neutron-server] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central] >> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: httpd-clone [httpd] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor] >> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >> >> Failed Actions: >> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none', >> last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms >> * openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none', >> last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms >> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none', >> last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms >> * openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none', >> last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms >> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not 
running' (7): call=77, status=complete, exitreason='none', >> last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms >> * openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none', >> last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms >> >> >> PCSD Status: >> overcloud-controller-0: Online >> overcloud-controller-1: Online >> overcloud-controller-2: Online >> >> Daemon Status: >> corosync: active/enabled >> pacemaker: active/enabled >> pcsd: active/enabled >> >> >> ________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >> Sent: Monday, May 30, 2016 4:56 AM >> To: John Trowbridge; Lars Kellogg-Stedman >> Cc: rdo-list >> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash >> >> >> Done one more time :- >> >> >> [stack at undercloud ~]$ heat deployment-show 9cc8087a-6d82-4261-8a13-ee8c46e3a02d >> >> Uploaded here :- >> >> http://textuploader.com/5bm5v >> ________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >> Sent: Sunday, May 29, 2016 3:39 AM >> To: John Trowbridge; Lars Kellogg-Stedman >> Cc: rdo-list >> Subject: [rdo-list] Tripleo QuickStart HA deploymemt attempts constantly crash >> >> >> Error every time is the same :- >> >> >> 2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:18 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt-ControllerServicesBaseDeployment_Step2-ufz2ccs5egd7]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED 
Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:20 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:20 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> 2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> 2016-05-29 07:20:21 [0]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:20:22 [0]: SIGNAL_COMPLETE Unknown >> 2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted >> 2016-05-29 07:24:22 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. 
>> + heat stack-list >> + grep -q CREATE_FAILED >> + deploy_status=1 >> ++ heat resource-list --nested-depth 5 overcloud >> ++ grep FAILED >> ++ grep 'StructuredDeployment ' >> ++ cut -d '|' -f3 >> + for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3 >> + exit 1 >> >> >> Minimal configuration deployments run with no errors and build completely functional environment. >> >> >> However, template :- >> >> >> ################################# >> # Test Controller + 2*Compute nodes >> ################################# >> control_memory: 6144 >> compute_memory: 6144 >> >> undercloud_memory: 8192 >> >> # Giving the undercloud additional CPUs can greatly improve heat's >> # performance (and result in a shorter deploy time). >> undercloud_vcpu: 4 >> >> # We set introspection to true and use only the minimal amount of nodes >> # for this job, but test all defaults otherwise. >> step_introspect: true >> >> # Define a single controller node and a single compute node. >> overcloud_nodes: >> - name: control_0 >> flavor: control >> >> - name: compute_0 >> flavor: compute >> >> - name: compute_1 >> flavor: compute >> >> # Tell tripleo how we want things done. >> extra_args: >- >> --neutron-network-type vxlan >> --neutron-tunnel-types vxlan >> --ntp-server pool.ntp.org >> >> network_isolation: true >> >> >> Picks up new memory setting but doesn't create second Compute Node. >> >> Every time just Controller && (1)* Compute. >> >> >> HW - i74790 , 32 GB RAM >> >> >> Thanks. 
>> >> Boris >> >> ________________________________ >> >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From trown at redhat.com Fri Jun 3 21:43:27 2016 From: trown at redhat.com (John Trowbridge) Date: Fri, 3 Jun 2016 17:43:27 -0400 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: <5751EE34.20901@redhat.com> References: <57517B59.7040103@redhat.com> <5751EE34.20901@redhat.com> Message-ID: <5751F9FF.7030305@redhat.com> On 06/03/2016 04:53 PM, John Trowbridge wrote: > I just did an HA deploy locally on master, and I see the same thing wrt > telemetry services being down due to failed redis import. That could be > a packaging bug (something should depend on python-redis, maybe > python-tooz?). That said, it does not appear fatal in my case. Is there > some issue other than telemetry services being down that you are seeing? > That is certainly something we should fix, but I wouldn't characterize > it as the deployment is constantly crashing. > Confirmed that installing python-redis fixes the telemetry issue by doing the following from the undercloud: sudo LIBGUESTFS_BACKEND=direct virt-customize -a overcloud-full.qcow2 --install python-redis openstack overcloud image upload --update-existing Then deleting the failed overcloud stack, and re-running overcloud-deploy.sh. > On 06/03/2016 11:30 AM, Boris Derzhavets wrote: >> 1. Attempting to address your concern ( if I understood you correct ) >> >> First log :- >> >> [root at overcloud-controller-0 ceilometer]# cat central.log | grep ERROR >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service [req-4db5f172-0bf0-4200-9cf4-174859cdc00b admin - - - -] Error starting thread. 
>> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service Traceback (most recent call last): >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service service.start() >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/agent/manager.py", line 384, in start >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self.partition_coordinator.start() >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 84, in start >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service backend_url, self._my_id) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements=verify_requirements, >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service 
verify_requirements, >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service import redis >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service ImportError: No module named redis >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service >>
>> Second log :- >>
>> [root at overcloud-controller-0 ceilometer]# cd - >> /var/log/aodh >> [root at overcloud-controller-0 aodh]# cat evaluator.log | grep ERROR >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service [-] Error starting thread.
>> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service Traceback (most recent call last): >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service service.start() >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/aodh/evaluator/__init__.py", line 229, in start >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self.partition_coordinator.start() >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/aodh/coordination.py", line 133, in start >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self.backend_url, self._my_id) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements=verify_requirements, >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service 
verify_requirements, >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements, >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service import redis >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service ImportError: No module named redis >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service >> >> 2 . Memory DIMMs DDR3 ( Kingston HyperX 1600 MHZ ) is not a problem >> My board ASUS Z97-P cannot support more 32 GB. So .... >> >> 3. i7 4790 surprised me on doing deployment on TripleO Quickstart , in particular, Controller+2xComputes ( --compute-scale 2 ) >> >> Thank you >> Boris. >> ________________________________________ >> From: John Trowbridge >> Sent: Friday, June 3, 2016 8:43 AM >> To: Boris Derzhavets; John Trowbridge; Lars Kellogg-Stedman >> Cc: rdo-list >> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash >> >> So this last one looks like telemetry services went down. You could >> check the logs on the controllers to see if it was OOM killed. My bet >> would be this is what is happening. >> >> The reason that HA is not the default for tripleo-quickstart is exactly >> this type of issue. 
It is pretty difficult to fit a full HA deployment >> of TripleO on a 32G virthost. I think there is near 100% chance that the >> default HA config will crash when trying to do anything on the >> deployed overcloud, due to running out of memory. >> >> I have had some success in my local test setup using KSM [1] on the >> virthost, and then changing the HA config to give the controllers more >> memory. This results in overcommiting, but KSM can handle overcommiting >> without going into swap. It might even be possible to try to setup KSM >> in the environment setup part of quickstart. I would certainly accept an >> RFE/patch for this [2,3]. >> >> If you have a larger virthost than 32G, you could similarly bump the >> memory for the controllers, which should lead to a much higher success rate. >> >> There is also a feature coming in TripleO [4] that will allow choosing >> what services get deployed in each role, which will allow us to tweak >> the tripleo-quickstart HA config to deploy a minimal service layout in >> order to reduce memory requirements. >> >> Thanks a ton for giving tripleo-quickstart a go! 
>> >> [1] https://en.wikipedia.org/wiki/Kernel_same-page_merging >> [2] https://bugs.launchpad.net/tripleo-quickstart >> [3] https://review.openstack.org/#/q/project:openstack/tripleo-quickstart >> [4] >> https://blueprints.launchpad.net/tripleo/+spec/composable-services-within-roles >> >> On 06/03/2016 06:20 AM, Boris Derzhavets wrote: >>> ===================================== >>> >>> Fresh HA deployment attempt >>> >>> ===================================== >>> >>> [stack at undercloud ~]$ date >>> Fri Jun 3 10:05:35 UTC 2016 >>> [stack at undercloud ~]$ heat stack-list >>> +--------------------------------------+------------+-----------------+---------------------+--------------+ >>> | id | stack_name | stack_status | creation_time | updated_time | >>> +--------------------------------------+------------+-----------------+---------------------+--------------+ >>> | 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud | CREATE_COMPLETE | 2016-06-03T08:14:19 | None | >>> +--------------------------------------+------------+-----------------+---------------------+--------------+ >>> [stack at undercloud ~]$ nova list >>> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >>> | ID | Name | Status | Task State | Power State | Networks | >>> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >>> | 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 | >>> | 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 | >>> | 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.11 | >>> | 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 | >>> 
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >>> [stack at undercloud ~]$ ssh heat-admin at 192.0.2.10 >>> Last login: Fri Jun 3 10:01:44 2016 from gateway >>> [heat-admin at overcloud-controller-0 ~]$ sudo su - >>> Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0 >>> [root at overcloud-controller-0 ~]# . keystonerc_admin >>> >>> [root at overcloud-controller-0 ~]# pcs status >>> Cluster name: tripleo_cluster >>> Last updated: Fri Jun 3 10:07:22 2016 Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0 >>> Stack: corosync >>> Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum >>> 3 nodes and 123 resources configured >>> >>> Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> >>> Full list of resources: >>> >>> ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 >>> Clone Set: haproxy-clone [haproxy] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 >>> Master/Slave Set: galera-master [galera] >>> Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: memcached-clone [memcached] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: rabbitmq-clone [rabbitmq] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-core-clone [openstack-core] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Master/Slave Set: redis-master [redis] >>> Masters: [ overcloud-controller-1 ] >>> Slaves: [ overcloud-controller-0 overcloud-controller-2 ] >>> Clone Set: mongod-clone [mongod] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: 
openstack-aodh-evaluator-clone [openstack-aodh-evaluator] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-l3-agent-clone [neutron-l3-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2 >>> Clone Set: openstack-heat-engine-clone [openstack-heat-engine] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-heat-api-clone [openstack-heat-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-collector-clone 
[openstack-ceilometer-collector] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-glance-api-clone [openstack-glance-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-nova-api-clone [openstack-nova-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-sahara-api-clone [openstack-sahara-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-glance-registry-clone [openstack-glance-registry] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-cinder-api-clone [openstack-cinder-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 
overcloud-controller-2 ]
>>> Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
>>>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
>>>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: delay-clone [delay]
>>>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: neutron-server-clone [neutron-server]
>>>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
>>>     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: httpd-clone [httpd]
>>>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
>>>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>> Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
>>>     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
>>>
>>> Failed Actions:
>>> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none',
>>>     last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms
>>> * openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none',
>>>     last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms
>>> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none',
>>>     last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms
>>> * openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none',
>>>     last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms
>>> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=77, status=complete, exitreason='none',
>>>     last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms
>>> * openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none',
>>>     last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms
>>>
>>>
>>> PCSD Status:
>>>   overcloud-controller-0: Online
>>>   overcloud-controller-1: Online
>>>   overcloud-controller-2: Online
>>>
>>> Daemon Status:
>>>   corosync: active/enabled
>>>   pacemaker: active/enabled
>>>   pcsd: active/enabled
>>>
>>>
>>> ________________________________
>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
>>> Sent: Monday, May 30, 2016 4:56 AM
>>> To: John Trowbridge; Lars Kellogg-Stedman
>>> Cc: rdo-list
>>> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
>>>
>>>
>>> Done one more time :-
>>>
>>>
>>> [stack at undercloud ~]$ heat deployment-show 9cc8087a-6d82-4261-8a13-ee8c46e3a02d
>>>
>>> Uploaded here :-
>>>
>>> http://textuploader.com/5bm5v
>>> ________________________________
>>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
>>> Sent: Sunday, May 29, 2016 3:39 AM
>>> To: John Trowbridge; Lars Kellogg-Stedman
>>> Cc: rdo-list
>>> Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash
>>>
>>>
>>> The error is the same every time :-
>>>
>>>
>>> 2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
>>> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:18 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt-ControllerServicesBaseDeployment_Step2-ufz2ccs5egd7]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
>>> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
>>> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:20 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:20 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
>>> 2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
>>> 2016-05-29 07:20:21 [0]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:20:22 [0]: SIGNAL_COMPLETE Unknown
>>> 2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted
>>> 2016-05-29 07:24:22 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
>>> Stack overcloud CREATE_FAILED
>>> Deployment failed: Heat Stack create failed.
>>> + heat stack-list
>>> + grep -q CREATE_FAILED
>>> + deploy_status=1
>>> ++ heat resource-list --nested-depth 5 overcloud
>>> ++ grep FAILED
>>> ++ grep 'StructuredDeployment '
>>> ++ cut -d '|' -f3
>>> + for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
>>> + heat deployment-show 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3
>>> + exit 1
>>>
>>>
>>> Minimal configuration deployments run with no errors and build a completely functional environment.
>>>
>>>
>>> However, this template :-
>>>
>>>
>>> #################################
>>> # Test Controller + 2*Compute nodes
>>> #################################
>>> control_memory: 6144
>>> compute_memory: 6144
>>>
>>> undercloud_memory: 8192
>>>
>>> # Giving the undercloud additional CPUs can greatly improve heat's
>>> # performance (and result in a shorter deploy time).
>>> undercloud_vcpu: 4
>>>
>>> # We set introspection to true and use only the minimal amount of nodes
>>> # for this job, but test all defaults otherwise.
>>> step_introspect: true
>>>
>>> # Define a single controller node and a single compute node.
>>> overcloud_nodes:
>>>   - name: control_0
>>>     flavor: control
>>>
>>>   - name: compute_0
>>>     flavor: compute
>>>
>>>   - name: compute_1
>>>     flavor: compute
>>>
>>> # Tell tripleo how we want things done.
>>> extra_args: >-
>>>   --neutron-network-type vxlan
>>>   --neutron-tunnel-types vxlan
>>>   --ntp-server pool.ntp.org
>>>
>>> network_isolation: true
>>>
>>>
>>> picks up the new memory settings but doesn't create the second Compute node.
>>>
>>> Every time it is just the Controller && (1)*Compute.
>>>
>>>
>>> HW - i7 4790, 32 GB RAM
>>>
>>>
>>> Thanks.
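[One way to narrow the symptom described above is to first confirm what the settings file actually requests per flavor. The helper below is an illustrative sketch, not part of tripleo-quickstart; it only scans the `flavor:` lines under `overcloud_nodes`, so it is a sanity check rather than a full YAML reader.]

```python
from collections import Counter

# A quickstart-style settings fragment like the one quoted above.
SETTINGS = """\
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute
"""

def count_flavors(text):
    """Return a Counter of the flavor values found in the node list."""
    flavors = [line.split(":", 1)[1].strip()
               for line in text.splitlines()
               if line.strip().startswith("flavor:")]
    return Counter(flavors)

counts = count_flavors(SETTINGS)
print(counts["control"], counts["compute"])  # 1 control, 2 compute requested
```

[If the file really requests two compute flavors but only one compute node appears, the mismatch is on the node registration/introspection side rather than in the template itself.]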
>>>
>>> Boris
>>>
>>> ________________________________
>>>
>>>
>>> _______________________________________________
>>> rdo-list mailing list
>>> rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>

From pgsousa at gmail.com Sat Jun 4 00:50:14 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Sat, 4 Jun 2016 01:50:14 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID:

Hi,

I finally managed to install a baremetal in Mitaka with 1 controller + 1
compute with network isolation. Thank god :)

All I did was:

#yum install centos-release-openstack-mitaka
#sudo yum install python-tripleoclient

without epel repos.

Then I followed the instructions from the Red Hat site.

I downloaded the overcloud images from:
http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/

I do have an issue that forces me to delete a json file and run
os-refresh-config inside my overcloud nodes; other than that it installs fine.

Now I'll test with 2 more controllers + 2 computes to have a full HA
deployment.

If anyone needs help to document this, I'll be happy to help.

Regards,
Pedro Sousa


On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:

> The report says: "Fix Released" as of 2016-05-24.
> Are you installing on a clean system with the latest repositories?
>
> Might also want to check your version of rabbitmq: I have
> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
>
> ----- Original Message -----
> > From: "Pedro Sousa"
> > To: "Ronelle Landy"
> > Cc: "Christopher Brown" , "Ignacio Bravo" <ibravo at ltgfederal.com>, "rdo-list"
> > Sent: Friday, June 3, 2016 1:20:43 PM
> > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> >
> > Any way to work around this? Maybe downgrade hiera?
> >
> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote:
> >
> > > I am not sure exactly where you installed from, and when you did your
> > > installation, but any chance you've hit:
> > > https://bugs.launchpad.net/tripleo/+bug/1584892?
> > > There is a linked bugzilla record.
> > >
> > > ----- Original Message -----
> > > > From: "Pedro Sousa"
> > > > To: "Ronelle Landy"
> > > > Cc: "Christopher Brown" , "Ignacio Bravo" <ibravo at ltgfederal.com>, "rdo-list"
> > > > Sent: Friday, June 3, 2016 12:26:58 PM
> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> > > >
> > > > Thanks Ronelle,
> > > >
> > > > do you think these kinds of errors can be related to network settings?
> > > >
> > > > "Could not retrieve fact='rabbitmq_nodename', resolution='':
> > > > undefined method `[]' for nil:NilClass Could not retrieve
> > > > fact='rabbitmq_nodename', resolution='': undefined method `[]'
> > > > for nil:NilClass"
> > > >
> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote:
> > > >
> > > > > Hi Pedro,
> > > > >
> > > > > You could use the docs you referred to.
> > > > > Alternatively, if you want to use a vm for the undercloud and baremetal
> > > > > machines for the overcloud, it is possible to use Tripleo Quickstart
> > > > > with a few modifications.
> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028.
> > > > > > > > > > ----- Original Message ----- > > > > > > From: "Pedro Sousa" > > > > > > To: "Ronelle Landy" > > > > > > Cc: "Christopher Brown" , "Ignacio Bravo" < > > > > > ibravo at ltgfederal.com>, "rdo-list" > > > > > > > > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > > > > > > > > > > > Hi Ronelle, > > > > > > > > > > > > maybe I understand it wrong but I thought that Tripleo Quickstart > > > was for > > > > > > deploying virtual environments? > > > > > > > > > > > > And for baremetal we should use > > > > > > > > > > > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > > > > ? > > > > > > > > > > > > Thanks > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > > > > wrote: > > > > > > > > > > > > > Hello, > > > > > > > > > > > > > > We have had success deploying RDO (Mitaka) on baremetal > systems - > > > using > > > > > > > Tripleo Quickstart with both single-nic-vlans and > bond-with-vlans > > > > > network > > > > > > > isolation configurations. > > > > > > > > > > > > > > Baremetal can have some complicated networking issues but, from > > > > > previous > > > > > > > experiences, if a single-controller deployment worked but a HA > > > > > deployment > > > > > > > did not, I would check: > > > > > > > - does the HA deployment command include: -e > > > > > > > > > > > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > > > > > - are there possible MTU issues? > > > > > > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > > > > From: "Christopher Brown" > > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > > > > > > > > Cc: rdo-list at redhat.com > > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? 
> > > > > > > > > > > > > > > > Hello Ignacio, > > > > > > > > > > > > > > > > Thanks for your response and good to know it isn't just me! > > > > > > > > > > > > > > > > I would be more than happy to provide developers with access > to > > > our > > > > > > > > bare metal environments. I'll also file some bugzilla > reports to > > > see > > > > > if > > > > > > > > this generates any interest. > > > > > > > > > > > > > > > > Please do let me know if you make any progress - I am trying > to > > > > > deploy > > > > > > > > HA with network isolation, multiple nics and vlans. > > > > > > > > > > > > > > > > The RDO web page states: > > > > > > > > > > > > > > > > "If you want to create a production-ready cloud, you'll want > to > > > use > > > > > the > > > > > > > > TripleO quickstart guide." > > > > > > > > > > > > > > > > which is a contradiction in terms really. > > > > > > > > > > > > > > > > Cheers > > > > > > > > > > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > > > > > > > > > Pedro / Christopher, > > > > > > > > > > > > > > > > > > Just wanted to share with you that I also had plenty of > issues > > > > > > > > > deploying on bare metal HA servers, and have paused the > > > deployment > > > > > > > > > using TripleO until better winds start to flow here. I was > > > able to > > > > > > > > > deploy the QuickStart, but on bare metal the history was > > > different. > > > > > > > > > Couldn't even deploy a two server configuration. > > > > > > > > > > > > > > > > > > I was thinking that it would be good to have the developers > > > have > > > > > > > > > access to one of our environments and go through a full > install > > > > > with > > > > > > > > > us to better see where things fail. We can do this > handholding > > > > > > > > > deployment once every week/month based on developers time > > > > > > > > > availability. That way we can get a working install, and > we can > > > > > > > > > troubleshoot real life environment problems. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > IB > > > > > > > > > > > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > > > > wrote: > > > > > > > > > > > > > > > > > > > Yes. I've used this, but I'll try again as there's seems > to > > > be > > > > > new > > > > > > > > > > updates. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Stable Branch Skip all repos mentioned above, other than > > > epel- > > > > > > > > > > release which is still required. > > > > > > > > > > Enable latest RDO Stable Delorean repository for all > packages > > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > > > > > https://trunk.r > > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > > > > > > Enable the Delorean Deps repository > > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > > > > > http://tru > > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > > > > > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < > > > > > cbrown2 at ocf.co. > > > > > > > > > > uk> wrote: > > > > > > > > > > > No, Liberty deployed ok for us. > > > > > > > > > > > > > > > > > > > > > > It suggests to me a package mismatch. Have you > completely > > > > > rebuilt > > > > > > > > > > > the > > > > > > > > > > > undercloud and then the images using Liberty? 
> > > > > > > > > > > > > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: > > > > > > > > > > > > AttributeError: 'module' object has no attribute > > > 'PortOpt' > > > > > > > > > > > -- > > > > > > > > > > > Regards, > > > > > > > > > > > > > > > > > > > > > > Christopher Brown > > > > > > > > > > > OpenStack Engineer > > > > > > > > > > > OCF plc > > > > > > > > > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > > > > > Web: www.ocf.co.uk > > > > > > > > > > > Blog: blog.ocf.co.uk > > > > > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support > request > > > must > > > > > > > > > > > always > > > > > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > > > > > generated > > > > > > > > > > > or > > > > > > > > > > > existing support ticket to be updated. Should this not > be > > > done > > > > > > > > > > > then OCF > > > > > > > > > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with > in a > > > > > > > > > > > timely > > > > > > > > > > > manner. > > > > > > > > > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. > > > > > Registered > > > > > > > > > > > number > > > > > > > > > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office > > > address: > > > > > > > > > > > OCF plc, > > > > > > > > > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > Chapeltown, > > > > > > > > > > > Sheffield S35 > > > > > > > > > > > 2PG. > > > > > > > > > > > > > > > > > > > > > > If you have received this message in error, please > notify > > > us > > > > > > > > > > > immediately and remove it from your system. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > > rdo-list mailing list > > > > > > > > > rdo-list at redhat.com > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > -- > > > > > > > > Regards, > > > > > > > > > > > > > > > > Christopher Brown > > > > > > > > OpenStack Engineer > > > > > > > > OCF plc > > > > > > > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > > Web: www.ocf.co.uk > > > > > > > > Blog: blog.ocf.co.uk > > > > > > > > Twitter: @ocfplc > > > > > > > > > > > > > > > > Please note, any emails relating to an OCF Support request > must > > > > > always > > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > > > generated or > > > > > > > > existing support ticket to be updated. Should this not be > done > > > then > > > > > OCF > > > > > > > > > > > > > > > > cannot be held responsible for requests not dealt with in a > > > timely > > > > > > > > manner. > > > > > > > > > > > > > > > > OCF plc is a company registered in England and Wales. > Registered > > > > > number > > > > > > > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office > address: > > > OCF > > > > > plc, > > > > > > > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > > > Sheffield > > > > > S35 > > > > > > > > 2PG. > > > > > > > > > > > > > > > > If you have received this message in error, please notify us > > > > > > > > immediately and remove it from your system. 
> > > > > > > > > > > > > > > > _______________________________________________ > > > > > > > > rdo-list mailing list > > > > > > > > rdo-list at redhat.com > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Sat Jun 4 01:13:13 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Fri, 3 Jun 2016 21:13:13 -0400 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> Message-ID: Pedro I have no objections to working with you to flesh out that document On Jun 3, 2016 8:51 PM, "Pedro Sousa" wrote: > Hi, > > I finally managed to install a baremetal in mitaka with 1 controller + 1 > compute with network isolation. Thank god :) > > All I did was: > > #yum install centos-release-openstack-mitaka > #sudo yum install python-tripleoclient > > without epel repos. > > Then followed instructions from Redhat Site. > > I downloaded the overcloud images from: > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ > > I do have an issue that forces me to delete a json file and run > os-refresh-config inside my overcloud nodes other than that it installs > fine. > > Now I'll test with more 2 controllers + 2 computes to have a full HA > deployment. > > If anyone needs help to document this I'll be happy to help. > > Regards, > Pedro Sousa > > > On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote: > >> The report says: "Fix Released" as of 2016-05-24. 
>> > > > > > > > >> > > > > > > > _______________________________________________ >> > > > > > > > rdo-list mailing list >> > > > > > > > rdo-list at redhat.com >> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > > > > > > >> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > > > > >> > > > > > > >> > > > > > >> > > > > >> > > > >> > > >> > >> > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Sat Jun 4 07:47:23 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Sat, 4 Jun 2016 07:47:23 +0000 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: <5751F9FF.7030305@redhat.com> References: <57517B59.7040103@redhat.com> <5751EE34.20901@redhat.com>,<5751F9FF.7030305@redhat.com> Message-ID: From: John Trowbridge Sent: Friday, June 3, 2016 5:43 PM To: Boris Derzhavets; Lars Kellogg-Stedman Cc: rdo-list Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash On 06/03/2016 04:53 PM, John Trowbridge wrote: > I just did an HA deploy locally on master, and I see the same thing wrt > telemetry services being down due to failed redis import. That could be > a packaging bug (something should depend on python-redis, maybe > python-tooz?). That said, it does not appear fatal in my case. Is there > some issue other than telemetry services being down that you are seeing? > That is certainly something we should fix, but I wouldn't characterize > it as the deployment is constantly crashing. 
I said that in regard to comment #3 in https://bugzilla.redhat.com/show_bug.cgi?id=1340865
Of course, the issue with the telemetry services is not "constantly crashing".

> Confirmed that installing python-redis fixes the telemetry issue by doing
> the following from the undercloud:
>
> sudo LIBGUESTFS_BACKEND=direct virt-customize -a overcloud-full.qcow2 --install python-redis
> openstack overcloud image upload --update-existing
>
> Then deleting the failed overcloud stack, and re-running
> overcloud-deploy.sh.

Doesn't work for me. Re-running fails to recreate the overcloud stack.

> On 06/03/2016 11:30 AM, Boris Derzhavets wrote:
>> 1. Attempting to address your concern (if I understood you correctly)
>>
>> First log :-
>>
>> [root at overcloud-controller-0 ceilometer]# cat central.log | grep ERROR >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service [req-4db5f172-0bf0-4200-9cf4-174859cdc00b admin - - - -] Error starting thread. >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service Traceback (most recent call last): >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service service.start() >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/agent/manager.py", line 384, in start >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self.partition_coordinator.start() >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/ceilometer/coordination.py", line 84, in start >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service backend_url, self._my_id) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver >>
2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements=verify_requirements, >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service verify_requirements, >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service import redis >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service ImportError: No 
module named redis >> 2016-06-03 08:50:04.405 17503 ERROR oslo_service.service >> >> Second log :- >> >> [root at overcloud-controller-0 ceilometer]# cd - >> /var/log/aodh >> [root at overcloud-controller-0 aodh]# cat evaluator.log | grep ERROR >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service [-] Error starting thread.
>> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service Traceback (most recent call last): >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service service.start() >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/aodh/evaluator/__init__.py", line 229, in start >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self.partition_coordinator.start() >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/aodh/coordination.py", line 133, in start >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self.backend_url, self._my_id) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 539, in get_coordinator >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service invoke_args=(member_id, parsed_url, options)).driver >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/driver.py", line 46, in __init__ >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements=verify_requirements, >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 55, in __init__ >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 171, in _load_plugins >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service self._on_load_failure_callback(self, ep, err) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 163, in _load_plugins >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service 
verify_requirements, >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/named.py", line 123, in _load_one_plugin >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service verify_requirements, >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 186, in _load_one_plugin >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service plugin = ep.load(require=verify_requirements) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service entry = __import__(self.module_name, globals(),globals(), ['__name__']) >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service File "/usr/lib/python2.7/site-packages/tooz/drivers/redis.py", line 27, in >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service import redis >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service ImportError: No module named redis >> 2016-06-03 08:46:20.552 32101 ERROR oslo_service.service >> >> 2 . Memory DIMMs DDR3 ( Kingston HyperX 1600 MHZ ) is not a problem >> My board ASUS Z97-P cannot support more 32 GB. So .... >> >> 3. i7 4790 surprised me on doing deployment on TripleO Quickstart , in particular, Controller+2xComputes ( --compute-scale 2 ) >> >> Thank you >> Boris. >> ________________________________________ >> From: John Trowbridge >> Sent: Friday, June 3, 2016 8:43 AM >> To: Boris Derzhavets; John Trowbridge; Lars Kellogg-Stedman >> Cc: rdo-list >> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash >> >> So this last one looks like telemetry services went down. You could >> check the logs on the controllers to see if it was OOM killed. My bet >> would be this is what is happening. >> >> The reason that HA is not the default for tripleo-quickstart is exactly >> this type of issue. 
It is pretty difficult to fit a full HA deployment >> of TripleO on a 32G virthost. I think there is a near 100% chance that the >> default HA config will crash when trying to do anything on the >> deployed overcloud, due to running out of memory. >> >> I have had some success in my local test setup using KSM [1] on the >> virthost, and then changing the HA config to give the controllers more >> memory. This results in overcommitting, but KSM can handle overcommitting >> without going into swap. It might even be possible to set up KSM >> in the environment setup part of quickstart. I would certainly accept an >> RFE/patch for this [2,3]. >> >> If you have a virthost larger than 32G, you could similarly bump the >> memory for the controllers, which should lead to a much higher success rate. >> >> There is also a feature coming in TripleO [4] that will allow choosing >> which services get deployed in each role, which will allow us to tweak >> the tripleo-quickstart HA config to deploy a minimal service layout in >> order to reduce memory requirements. >> >> Thanks a ton for giving tripleo-quickstart a go! >> >> [1] https://en.wikipedia.org/wiki/Kernel_same-page_merging
>> [2] https://bugs.launchpad.net/tripleo-quickstart >> [3] https://review.openstack.org/#/q/project:openstack/tripleo-quickstart >> [4] >> https://blueprints.launchpad.net/tripleo/+spec/composable-services-within-roles >> >> On 06/03/2016 06:20 AM, Boris Derzhavets wrote: >>> ===================================== >>> >>> Fresh HA deployment attempt >>> >>> ===================================== >>> >>> [stack at undercloud ~]$ date >>> Fri Jun 3 10:05:35 UTC 2016 >>> [stack at undercloud ~]$ heat stack-list >>> +--------------------------------------+------------+-----------------+---------------------+--------------+ >>> | id | stack_name | stack_status | creation_time | updated_time | >>> +--------------------------------------+------------+-----------------+---------------------+--------------+ >>> | 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud | CREATE_COMPLETE | 2016-06-03T08:14:19 | None | >>> +--------------------------------------+------------+-----------------+---------------------+--------------+ >>> [stack at undercloud ~]$ nova list >>> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >>> | ID | Name | Status | Task State | Power State | Networks | >>> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >>> | 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 | >>> | 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 | >>> | 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.11 | >>> | 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 | >>> +--------------------------------------+-------------------------+--------+------------+-------------+---------------------+ >>> [stack at 
undercloud ~]$ ssh heat-admin at 192.0.2.10 >>> Last login: Fri Jun 3 10:01:44 2016 from gateway >>> [heat-admin at overcloud-controller-0 ~]$ sudo su - >>> Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0 >>> [root at overcloud-controller-0 ~]# . keystonerc_admin >>> >>> [root at overcloud-controller-0 ~]# pcs status >>> Cluster name: tripleo_cluster >>> Last updated: Fri Jun 3 10:07:22 2016 Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0 >>> Stack: corosync >>> Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum >>> 3 nodes and 123 resources configured >>> >>> Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> >>> Full list of resources: >>> >>> ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 >>> Clone Set: haproxy-clone [haproxy] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 >>> Master/Slave Set: galera-master [galera] >>> Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: memcached-clone [memcached] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: rabbitmq-clone [rabbitmq] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-core-clone [openstack-core] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Master/Slave Set: redis-master [redis] >>> Masters: [ overcloud-controller-1 ] >>> Slaves: [ overcloud-controller-0 overcloud-controller-2 ] >>> Clone Set: mongod-clone [mongod] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> 
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-l3-agent-clone [neutron-l3-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2 >>> Clone Set: openstack-heat-engine-clone [openstack-heat-engine] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-heat-api-clone [openstack-heat-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-glance-api-clone 
[openstack-glance-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-nova-api-clone [openstack-nova-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-sahara-api-clone [openstack-sahara-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-glance-registry-clone [openstack-glance-registry] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-cinder-api-clone [openstack-cinder-api] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent] >>> Started: [ overcloud-controller-0 overcloud-controller-1 
overcloud-controller-2 ] >>> Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: delay-clone [delay] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: neutron-server-clone [neutron-server] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central] >>> Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: httpd-clone [httpd] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor] >>> Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] >>> >>> Failed Actions: >>> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none', >>> last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms >>> * openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none', >>> last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms >>> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none', >>> last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms >>> * openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none', >>> last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms >>> * openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=77, status=complete, 
exitreason='none', >>> last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms >>> * openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none', >>> last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms >>> >>> >>> PCSD Status: >>> overcloud-controller-0: Online >>> overcloud-controller-1: Online >>> overcloud-controller-2: Online >>> >>> Daemon Status: >>> corosync: active/enabled >>> pacemaker: active/enabled >>> pcsd: active/enabled >>> >>> >>> ________________________________ >>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>> Sent: Monday, May 30, 2016 4:56 AM >>> To: John Trowbridge; Lars Kellogg-Stedman >>> Cc: rdo-list >>> Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash >>> >>> >>> Done one more time :- >>> >>> >>> [stack at undercloud ~]$ heat deployment-show 9cc8087a-6d82-4261-8a13-ee8c46e3a02d >>> >>> Uploaded here :- >>> >>> http://textuploader.com/5bm5v >>> ________________________________ >>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >>> Sent: Sunday, May 29, 2016 3:39 AM >>> To: John Trowbridge; Lars Kellogg-Stedman >>> Cc: rdo-list >>> Subject: [rdo-list] Tripleo QuickStart HA deploymemt attempts constantly crash >>> >>> >>> Error every time is the same :- >>> >>> >>> 2016-05-29 07:20:17 [0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >>> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:18 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt-ControllerServicesBaseDeployment_Step2-ufz2ccs5egd7]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6 >>> 2016-05-29 07:20:18 [0]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:19 [ControllerServicesBaseDeployment_Step2]: 
CREATE_FAILED Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >>> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:19 [0]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:20 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:20 [overcloud-ControllerNodesPostDeployment-dzawjmjyaidt]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >>> 2016-05-29 07:20:21 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >>> 2016-05-29 07:20:21 [0]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:22 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:20:22 [0]: SIGNAL_COMPLETE Unknown >>> 2016-05-29 07:24:22 [ComputeNodesPostDeployment]: CREATE_FAILED CREATE aborted >>> 2016-05-29 07:24:22 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerServicesBaseDeployment_Step2.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 >>> Stack overcloud CREATE_FAILED >>> Deployment failed: Heat Stack create failed. 
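The script trace that follows extracts the failed StructuredDeployment IDs from `heat resource-list` output with grep and cut. To make the column handling explicit, here is the same pipeline run against a canned, reconstructed table row (the UUID is taken from the trace; the surrounding row layout is assumed from standard heat table output):

```shell
# What overcloud-deploy.sh's failure handler does: find FAILED
# StructuredDeployment rows in `heat resource-list --nested-depth 5`
# output and cut out field 3 of the '|'-delimited table (the physical
# resource id later passed to `heat deployment-show`).
sample_table='| 0 | 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3 | OS::Heat::StructuredDeployment | CREATE_FAILED |
| 1 | aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee | OS::Heat::StructuredDeployment | CREATE_COMPLETE |'

# Field 1 is the empty text before the leading '|', so -f3 is the UUID column.
failed=$(printf '%s\n' "$sample_table" | grep FAILED | grep 'StructuredDeployment ' | cut -d '|' -f3)
echo "$failed"   # the FAILED row's UUID (with the table's padding spaces)
```

Only the CREATE_FAILED row survives the filters; the CREATE_COMPLETE one is dropped.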
>>> + heat stack-list
>>> + grep -q CREATE_FAILED
>>> + deploy_status=1
>>> ++ heat resource-list --nested-depth 5 overcloud
>>> ++ grep FAILED
>>> ++ grep 'StructuredDeployment '
>>> ++ cut -d '|' -f3
>>> + for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED |
>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
>>> + heat deployment-show 66bd3fbe-296b-4f88-87a7-5ceafd05c1d3
>>> + exit 1
>>>
>>> Minimal configuration deployments run with no errors and build a completely functional environment.
>>>
>>> However, this template:
>>>
>>> #################################
>>> # Test Controller + 2*Compute nodes
>>> #################################
>>> control_memory: 6144
>>> compute_memory: 6144
>>>
>>> undercloud_memory: 8192
>>>
>>> # Giving the undercloud additional CPUs can greatly improve heat's
>>> # performance (and result in a shorter deploy time).
>>> undercloud_vcpu: 4
>>>
>>> # We set introspection to true and use only the minimal amount of nodes
>>> # for this job, but test all defaults otherwise.
>>> step_introspect: true
>>>
>>> # Define a single controller node and a single compute node.
>>> overcloud_nodes:
>>>   - name: control_0
>>>     flavor: control
>>>
>>>   - name: compute_0
>>>     flavor: compute
>>>
>>>   - name: compute_1
>>>     flavor: compute
>>>
>>> # Tell tripleo how we want things done.
>>> extra_args: >-
>>>   --neutron-network-type vxlan
>>>   --neutron-tunnel-types vxlan
>>>   --ntp-server pool.ntp.org
>>>
>>> network_isolation: true
>>>
>>> It picks up the new memory settings but doesn't create the second Compute node.
>>>
>>> Every time it is just the Controller && (1)*Compute.
>>>
>>> HW - i7-4790, 32 GB RAM
>>>
>>> Thanks.
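A back-of-the-envelope check (illustrative only, not part of quickstart) makes the memory pressure discussed above concrete: just add up what the template requests and compare it to the 32 GB virthost. The node sizes come from the template; nothing here queries a real host.

```shell
# Sum the guest memory a quickstart node layout requests and compare it
# to a 32 GB virthost. All values in MB.
undercloud_mb=8192
control_mb=6144
compute_mb=6144
host_mb=$((32 * 1024))

# The template above: 1 controller + 2 computes.
total_mb=$((undercloud_mb + control_mb + 2 * compute_mb))
echo "requested: ${total_mb} MB of ${host_mb} MB"       # 26624 of 32768 - ~6 GB spare

# A default-style HA layout: 3 controllers + 1 compute.
ha_total_mb=$((undercloud_mb + 3 * control_mb + compute_mb))
echo "HA requested: ${ha_total_mb} MB of ${host_mb} MB" # 32768 of 32768 - no headroom
```

The HA case consumes the entire host before the hypervisor itself gets a byte, which matches the OOM suspicion earlier in the thread.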
>>> Boris
>>> ________________________________

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pgsousa at gmail.com Sat Jun 4 11:07:39 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Sat, 4 Jun 2016 12:07:39 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: 
References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID: 

Sure, just let me know how to start. I don't know how documentation works for the RDO project, but we can share a Google Drive document or something and I'll start contributing to it.

Regards

On Sat, Jun 4, 2016 at 2:13 AM, Mohammed Arafa wrote:
> Pedro
> I have no objections to working with you to flesh out that document
> On Jun 3, 2016 8:51 PM, "Pedro Sousa" wrote:
>> Hi,
>>
>> I finally managed to install a baremetal Mitaka deployment with 1 controller + 1
>> compute with network isolation. Thank god :)
>>
>> All I did was:
>>
>> #yum install centos-release-openstack-mitaka
>> #sudo yum install python-tripleoclient
>>
>> without epel repos.
>>
>> Then followed the instructions from the Red Hat site.
>> I downloaded the overcloud images from:
>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
>>
>> I do have an issue that forces me to delete a json file and run
>> os-refresh-config inside my overcloud nodes; other than that it installs
>> fine.
>>
>> Now I'll test with 2 more controllers + 2 computes to have a full HA
>> deployment.
>>
>> If anyone needs help to document this I'll be happy to help.
>>
>> Regards,
>> Pedro Sousa
>>
>> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:
>>
>>> The report says: "Fix Released" as of 2016-05-24.
>>> Are you installing on a clean system with the latest repositories?
>>>
>>> Might also want to check your version of rabbitmq: I have
>>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
>>>
>>> ----- Original Message -----
>>> > From: "Pedro Sousa"
>>> > To: "Ronelle Landy"
>>> > Cc: "Christopher Brown" , "Ignacio Bravo" <ibravo at ltgfederal.com>, "rdo-list"
>>> >
>>> > Sent: Friday, June 3, 2016 1:20:43 PM
>>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>>> >
>>> > Any way to work around this? Maybe downgrade hiera?
>>> >
>>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote:
>>> >
>>> > > I am not sure exactly where you installed from, and when you did your
>>> > > installation, but any chance you've hit:
>>> > > https://bugs.launchpad.net/tripleo/+bug/1584892?
>>> > > There is a linked bugzilla record.
>>> > >
>>> > > ----- Original Message -----
>>> > > > From: "Pedro Sousa"
>>> > > > To: "Ronelle Landy"
>>> > > > Cc: "Christopher Brown" , "Ignacio Bravo" <ibravo at ltgfederal.com>, "rdo-list"
>>> > > >
>>> > > > Sent: Friday, June 3, 2016 12:26:58 PM
>>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>>> > > >
>>> > > > Thanks Ronelle,
>>> > > >
>>> > > > do you think this kind of error can be related to network
>>> > > > settings?
>>> > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', >>> resolution='': >>> > > > undefined method `[]' for nil:NilClass Could not retrieve >>> > > > fact='rabbitmq_nodename', resolution='': undefined >>> method `[]' >>> > > > for nil:NilClass" >>> > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy >>> wrote: >>> > > > >>> > > > > Hi Pedro, >>> > > > > >>> > > > > You could use the docs you referred to. >>> > > > > Alternatively, if you want to use a vm for the undercloud and >>> baremetal >>> > > > > machines for the overcloud, it is possible to use Tripleo >>> Qucikstart >>> > > with a >>> > > > > few modifications. >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. >>> > > > > >>> > > > > ----- Original Message ----- >>> > > > > > From: "Pedro Sousa" >>> > > > > > To: "Ronelle Landy" >>> > > > > > Cc: "Christopher Brown" , "Ignacio Bravo" < >>> > > > > ibravo at ltgfederal.com>, "rdo-list" >>> > > > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> > > > > > >>> > > > > > Hi Ronelle, >>> > > > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo >>> Quickstart >>> > > was for >>> > > > > > deploying virtual environments? >>> > > > > > >>> > > > > > And for baremetal we should use >>> > > > > > >>> > > > > >>> > > >>> http://docs.openstack.org/developer/tripleo-docs/installation/installation.html >>> > > > > > ? >>> > > > > > >>> > > > > > Thanks >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy < >>> rlandy at redhat.com> >>> > > wrote: >>> > > > > > >>> > > > > > > Hello, >>> > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal >>> systems - >>> > > using >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and >>> bond-with-vlans >>> > > > > network >>> > > > > > > isolation configurations. 
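Ronelle's tip above about checking the installed rabbitmq-server build (3.6.2 or newer carried the fix for the quoted bug) can be checked mechanically. This is a sketch: the minimum version comes from her message, while the `rpm` query is commented out and a sample value substituted so the comparison logic can be shown on its own:

```shell
#!/bin/sh
# Compare dot-separated versions: version_ge A B succeeds when A >= B.
# Relies on GNU sort's -V (version sort), available on CentOS 7.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

required="3.6.2"
# On a real undercloud you would query the installed package:
#   installed=$(rpm -q --qf '%{VERSION}' rabbitmq-server)
installed="3.6.2"   # sample value for illustration

if version_ge "$installed" "$required"; then
    echo "rabbitmq-server $installed is new enough (>= $required)"
else
    echo "rabbitmq-server $installed is older than $required - update it"
fi
```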
>>> > > > > > >
>>> > > > > > > Baremetal can have some complicated networking issues but, from
>>> > > > > > > previous experience, if a single-controller deployment worked but
>>> > > > > > > an HA deployment did not, I would check:
>>> > > > > > > - does the HA deployment command include: -e
>>> > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
>>> > > > > > > - are there possible MTU issues?
>>> > > > > > >
>>> > > > > > > ----- Original Message -----
>>> > > > > > > > From: "Christopher Brown"
>>> > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com
>>> > > > > > > > Cc: rdo-list at redhat.com
>>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM
>>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>>> > > > > > > >
>>> > > > > > > > Hello Ignacio,
>>> > > > > > > >
>>> > > > > > > > Thanks for your response and good to know it isn't just me!
>>> > > > > > > >
>>> > > > > > > > I would be more than happy to provide developers with access to
>>> > > > > > > > our bare metal environments. I'll also file some bugzilla
>>> > > > > > > > reports to see if this generates any interest.
>>> > > > > > > >
>>> > > > > > > > Please do let me know if you make any progress - I am trying to
>>> > > > > > > > deploy HA with network isolation, multiple nics and vlans.
>>> > > > > > > >
>>> > > > > > > > The RDO web page states:
>>> > > > > > > >
>>> > > > > > > > "If you want to create a production-ready cloud, you'll want to
>>> > > > > > > > use the TripleO quickstart guide."
>>> > > > > > > >
>>> > > > > > > > which is a contradiction in terms really.
>>> > > > > > > >
>>> > > > > > > > Cheers
>>> > > > > > > >
>>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote:
>>> > > > > > > > > Pedro / Christopher,
>>> > > > > > > > >
>>> > > > > > > > > Just wanted to share with you that I also had plenty of
>>> > > > > > > > > issues deploying on bare metal HA servers, and have paused
>>> > > > > > > > > the deployment using TripleO until better winds start to flow
>>> > > > > > > > > here. I was able to deploy the QuickStart, but on bare metal
>>> > > > > > > > > the story was different. I couldn't even deploy a two-server
>>> > > > > > > > > configuration.
>>> > > > > > > > >
>>> > > > > > > > > I was thinking that it would be good to have the developers
>>> > > > > > > > > get access to one of our environments and go through a full
>>> > > > > > > > > install with us to better see where things fail. We can do
>>> > > > > > > > > this handholding deployment once every week/month based on
>>> > > > > > > > > developers' time availability. That way we can get a working
>>> > > > > > > > > install, and we can troubleshoot real-life environment
>>> > > > > > > > > problems.
>>> > > > > > > > >
>>> > > > > > > > > IB
>>> > > > > > > > >
>>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa <pgsousa at gmail.com> wrote:
>>> > > > > > > > >
>>> > > > > > > > > > Yes. I've used this, but I'll try again as there seem to
>>> > > > > > > > > > be new updates:
>>> > > > > > > > > >
>>> > > > > > > > > > Stable Branch: Skip all repos mentioned above, other than
>>> > > > > > > > > > epel-release which is still required.
>>> > > > > > > > > > Enable the latest RDO Stable Delorean repository for all packages:
>>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo
>>> > > > > > > > > > https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
>>> > > > > > > > > > Enable the Delorean Deps repository:
>>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo
>>> > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
>>> > > > > > > > > >
>>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown <cbrown2 at ocf.co.uk> wrote:
>>> > > > > > > > > > > No, Liberty deployed ok for us.
>>> > > > > > > > > > >
>>> > > > > > > > > > > It suggests to me a package mismatch. Have you completely
>>> > > > > > > > > > > rebuilt the undercloud and then the images using Liberty?
>>> > > > > > > > > > >
>>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote:
>>> > > > > > > > > > > > AttributeError: 'module' object has no attribute 'PortOpt'
>>> > > > > > > > > > > --
>>> > > > > > > > > > > Regards,
>>> > > > > > > > > > >
>>> > > > > > > > > > > Christopher Brown
>>> > > > > > > > > > > OpenStack Engineer
>>> > > > > > > > > > > OCF plc
>>> > > > > > > > > > >
>>> > > > > > > > > > > Tel: +44 (0)114 257 2200
>>> > > > > > > > > > > Web: www.ocf.co.uk
>>> > > > > > > > > > > Blog: blog.ocf.co.uk
>>> > > > > > > > > > > Twitter: @ocfplc
>>> > > > > > > > > > >
>>> > > > > > > > > > > Please note, any emails relating to an OCF Support request
>>> > > > > > > > > > > must always be sent to support at ocf.co.uk for a ticket
>>> > > > > > > > > > > number to be generated or an existing support ticket to be
>>> > > > > > > > > > > updated. Should this not be done then OCF cannot be held
>>> > > > > > > > > > > responsible for requests not dealt with in a timely manner.
>>> > > > > > > > > > >
>>> > > > > > > > > > > OCF plc is a company registered in England and Wales.
>>> > > > > > > > > > > Registered number 4132533, VAT number GB 780 6803 14.
>>> > > > > > > > > > > Registered office address: OCF plc, 5 Rotunda Business
>>> > > > > > > > > > > Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG.
>>> > > > > > > > > > >
>>> > > > > > > > > > > If you have received this message in error, please notify
>>> > > > > > > > > > > us immediately and remove it from your system.

From ak at cloudssky.com Sat Jun 4 13:46:53 2016
From: ak at cloudssky.com (Arash Kaffamanesh)
Date: Sat, 4 Jun 2016 15:46:53 +0200
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID:

Pedro,

>> then followed instructions from Redhat Site.

Would it be possible to share the link which you followed on the Red Hat
site for the TripleO installation?

Thanks,
-Arash

On Sat, Jun 4, 2016 at 1:07 PM, Pedro Sousa wrote:

> Sure,
>
> just let me know how to start, as I don't know how documentation works for
> the RDO project, but we can share a Google Drive document or something and
> I'll start contributing to it.
From pgsousa at gmail.com Sat Jun 4 13:52:27 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Sat, 4 Jun 2016 14:52:27 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID:

Sure:

https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/director-installation-and-usage/chapter-4-installing-the-undercloud

Regards,
Pedro Sousa

On Sat, Jun 4, 2016 at 2:46 PM, Arash Kaffamanesh wrote:

> Pedro,
>
> >> then followed instructions from Redhat Site.
>
> Would it be possible to share the link which you followed on the Red Hat
> site for the TripleO installation?
>
> Thanks,
> -Arash
>>>>> > > > > > > > > > uk> wrote: >>>>> > > > > > > > > > > No, Liberty deployed ok for us. >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > It suggests to me a package mismatch. Have you >>>>> completely >>>>> > > > > rebuilt >>>>> > > > > > > > > > > the >>>>> > > > > > > > > > > undercloud and then the images using Liberty? >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa >>>>> wrote: >>>>> > > > > > > > > > > > AttributeError: 'module' object has no attribute >>>>> > > 'PortOpt' >>>>> > > > > > > > > > > -- >>>>> > > > > > > > > > > Regards, >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > Christopher Brown >>>>> > > > > > > > > > > OpenStack Engineer >>>>> > > > > > > > > > > OCF plc >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > Tel: +44 (0)114 257 2200 >>>>> > > > > > > > > > > Web: www.ocf.co.uk >>>>> > > > > > > > > > > Blog: blog.ocf.co.uk >>>>> > > > > > > > > > > Twitter: @ocfplc >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > Please note, any emails relating to an OCF Support >>>>> request >>>>> > > must >>>>> > > > > > > > > > > always >>>>> > > > > > > > > > > be sent to support at ocf.co.uk for a ticket number >>>>> to be >>>>> > > > > generated >>>>> > > > > > > > > > > or >>>>> > > > > > > > > > > existing support ticket to be updated. Should this >>>>> not be >>>>> > > done >>>>> > > > > > > > > > > then OCF >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > cannot be held responsible for requests not dealt >>>>> with in a >>>>> > > > > > > > > > > timely >>>>> > > > > > > > > > > manner. >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > OCF plc is a company registered in England and >>>>> Wales. >>>>> > > > > Registered >>>>> > > > > > > > > > > number >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. 
Registered >>>>> office >>>>> > > address: >>>>> > > > > > > > > > > OCF plc, >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, >>>>> Chapeltown, >>>>> > > > > > > > > > > Sheffield S35 >>>>> > > > > > > > > > > 2PG. >>>>> > > > > > > > > > > >>>>> > > > > > > > > > > If you have received this message in error, please >>>>> notify >>>>> > > us >>>>> > > > > > > > > > > immediately and remove it from your system. >>>>> > > > > > > > > > > >>>>> > > > > > > > > >>>>> > > > > > > > > _______________________________________________ >>>>> > > > > > > > > rdo-list mailing list >>>>> > > > > > > > > rdo-list at redhat.com >>>>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list >>>>> > > > > > > > > >>>>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> > > > > > > > -- >>>>> > > > > > > > Regards, >>>>> > > > > > > > >>>>> > > > > > > > Christopher Brown >>>>> > > > > > > > OpenStack Engineer >>>>> > > > > > > > OCF plc >>>>> > > > > > > > >>>>> > > > > > > > Tel: +44 (0)114 257 2200 >>>>> > > > > > > > Web: www.ocf.co.uk >>>>> > > > > > > > Blog: blog.ocf.co.uk >>>>> > > > > > > > Twitter: @ocfplc >>>>> > > > > > > > >>>>> > > > > > > > Please note, any emails relating to an OCF Support >>>>> request must >>>>> > > > > always >>>>> > > > > > > > be sent to support at ocf.co.uk for a ticket number to be >>>>> > > generated or >>>>> > > > > > > > existing support ticket to be updated. Should this not >>>>> be done >>>>> > > then >>>>> > > > > OCF >>>>> > > > > > > > >>>>> > > > > > > > cannot be held responsible for requests not dealt with >>>>> in a >>>>> > > timely >>>>> > > > > > > > manner. >>>>> > > > > > > > >>>>> > > > > > > > OCF plc is a company registered in England and Wales. >>>>> Registered >>>>> > > > > number >>>>> > > > > > > > >>>>> > > > > > > > 4132533, VAT number GB 780 6803 14. 
Registered office >>>>> address: >>>>> > > OCF >>>>> > > > > plc, >>>>> > > > > > > > >>>>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, >>>>> > > Sheffield >>>>> > > > > S35 >>>>> > > > > > > > 2PG. >>>>> > > > > > > > >>>>> > > > > > > > If you have received this message in error, please >>>>> notify us >>>>> > > > > > > > immediately and remove it from your system. >>>>> > > > > > > > >>>>> > > > > > > > _______________________________________________ >>>>> > > > > > > > rdo-list mailing list >>>>> > > > > > > > rdo-list at redhat.com >>>>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list >>>>> > > > > > > > >>>>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >>>>> > > > > > > > >>>>> > > > > > > >>>>> > > > > > >>>>> > > > > >>>>> > > > >>>>> > > >>>>> > >>>>> >>>> >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgsousa at gmail.com Sat Jun 4 15:05:51 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Sat, 4 Jun 2016 16:05:51 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? 
In-Reply-To: 
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID: 

Hi,

some update on scaling the cloud:

1 controller + 1 compute -> 1 controller + 3 computes: OK

1 controller + 3 computes -> 3 controllers + 3 computes: FAILS

Problem: the new controller nodes are stuck in "pcsd start", so it seems
to be a problem joining the pacemaker cluster... Has anyone had this
problem?

Regards

On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa wrote:

> Hi,
>
> I finally managed to install a baremetal Mitaka deployment with 1
> controller + 1 compute with network isolation. Thank god :)
>
> All I did was:
>
> #yum install centos-release-openstack-mitaka
> #sudo yum install python-tripleoclient
>
> without epel repos.
>
> Then followed the instructions from the Red Hat site.
>
> I downloaded the overcloud images from:
> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
>
> I do have an issue that forces me to delete a json file and run
> os-refresh-config inside my overcloud nodes; other than that it installs
> fine.
>
> Now I'll test with 2 more controllers + 2 computes to have a full HA
> deployment.
>
> If anyone needs help to document this I'll be happy to help.
>
> Regards,
> Pedro Sousa
>
> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:
>
>> The report says: "Fix Released" as of 2016-05-24.
>> Are you installing on a clean system with the latest repositories?
>>
>> Might also want to check your version of rabbitmq: I have
>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
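For the stuck "pcsd start" reported above, a first-pass triage on the new
controllers might look like the sketch below. This is not from the thread:
it assumes a pacemaker/corosync/pcs install on the node, and each command is
skipped if the tool happens to be missing on the host.

```shell
# Triage for a controller wedged while joining the pacemaker cluster.
# Each command only runs if the tool is installed; otherwise we say so.
run() {
    if command -v "$1" >/dev/null 2>&1; then
        "$@" || true                    # keep going even if a check fails
    else
        echo "skip (not installed): $1"
    fi
}

run systemctl is-active pcsd          # is the pcs daemon up at all?
run pcs status                        # cluster membership as pacemaker sees it
run journalctl -u pcsd -u corosync -b --no-pager -n 20   # join/auth errors
```

If `pcs status` shows the new nodes as offline while pcsd itself is active,
the journal output is usually where the authentication or network error
shows up.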
>>
>> ----- Original Message -----
>> > From: "Pedro Sousa" 
>> > To: "Ronelle Landy" 
>> > Cc: "Christopher Brown" , "Ignacio Bravo" <ibravo at ltgfederal.com>,
>> > "rdo-list" 
>> > Sent: Friday, June 3, 2016 1:20:43 PM
>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> >
>> > Any way to work around this? Maybe downgrade hiera?
>> >
>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote:
>> >
>> > > I am not sure exactly where you installed from, and when you did your
>> > > installation, but any chance you've hit:
>> > > https://bugs.launchpad.net/tripleo/+bug/1584892?
>> > > There is a linked bugzilla record.
>> > >
>> > > ----- Original Message -----
>> > > > From: "Pedro Sousa" 
>> > > > To: "Ronelle Landy" 
>> > > > Cc: "Christopher Brown" , "Ignacio Bravo" <ibravo at ltgfederal.com>,
>> > > > "rdo-list" 
>> > > > Sent: Friday, June 3, 2016 12:26:58 PM
>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > >
>> > > > Thanks Ronelle,
>> > > >
>> > > > do you think this kind of error can be related to network settings?
>> > > >
>> > > > "Could not retrieve fact='rabbitmq_nodename', resolution='':
>> > > > undefined method `[]' for nil:NilClass Could not retrieve
>> > > > fact='rabbitmq_nodename', resolution='': undefined method `[]'
>> > > > for nil:NilClass"
>> > > >
>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote:
>> > > >
>> > > > > Hi Pedro,
>> > > > >
>> > > > > You could use the docs you referred to.
>> > > > > Alternatively, if you want to use a vm for the undercloud and
>> > > > > baremetal machines for the overcloud, it is possible to use
>> > > > > TripleO Quickstart with a few modifications.
>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028.
>> > > > > >> > > > > ----- Original Message ----- >> > > > > > From: "Pedro Sousa" >> > > > > > To: "Ronelle Landy" >> > > > > > Cc: "Christopher Brown" , "Ignacio Bravo" < >> > > > > ibravo at ltgfederal.com>, "rdo-list" >> > > > > > >> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM >> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >> > > > > > >> > > > > > Hi Ronelle, >> > > > > > >> > > > > > maybe I understand it wrong but I thought that Tripleo >> Quickstart >> > > was for >> > > > > > deploying virtual environments? >> > > > > > >> > > > > > And for baremetal we should use >> > > > > > >> > > > > >> > > >> http://docs.openstack.org/developer/tripleo-docs/installation/installation.html >> > > > > > ? >> > > > > > >> > > > > > Thanks >> > > > > > >> > > > > > >> > > > > > >> > > > > > >> > > > > > >> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy < >> rlandy at redhat.com> >> > > wrote: >> > > > > > >> > > > > > > Hello, >> > > > > > > >> > > > > > > We have had success deploying RDO (Mitaka) on baremetal >> systems - >> > > using >> > > > > > > Tripleo Quickstart with both single-nic-vlans and >> bond-with-vlans >> > > > > network >> > > > > > > isolation configurations. >> > > > > > > >> > > > > > > Baremetal can have some complicated networking issues but, >> from >> > > > > previous >> > > > > > > experiences, if a single-controller deployment worked but a HA >> > > > > deployment >> > > > > > > did not, I would check: >> > > > > > > - does the HA deployment command include: -e >> > > > > > > >> > > > > >> > > >> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> > > > > > > - are there possible MTU issues? 
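The MTU question above is easy to rule in or out per node. A minimal sketch,
assuming only a Linux host with sysfs (no OpenStack tooling); run it on each
overcloud node and compare the values against the switch configuration:

```shell
# Print every network interface and its MTU (Linux sysfs).
# Overcloud nics, bonds, VLAN interfaces and switch ports should agree,
# or cluster traffic between controllers may be silently dropped.
for dev in /sys/class/net/*; do
    printf '%s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```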
>> > > > > > > >> > > > > > > >> > > > > > > ----- Original Message ----- >> > > > > > > > From: "Christopher Brown" >> > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com >> > > > > > > > Cc: rdo-list at redhat.com >> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM >> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >> > > > > > > > >> > > > > > > > Hello Ignacio, >> > > > > > > > >> > > > > > > > Thanks for your response and good to know it isn't just me! >> > > > > > > > >> > > > > > > > I would be more than happy to provide developers with >> access to >> > > our >> > > > > > > > bare metal environments. I'll also file some bugzilla >> reports to >> > > see >> > > > > if >> > > > > > > > this generates any interest. >> > > > > > > > >> > > > > > > > Please do let me know if you make any progress - I am >> trying to >> > > > > deploy >> > > > > > > > HA with network isolation, multiple nics and vlans. >> > > > > > > > >> > > > > > > > The RDO web page states: >> > > > > > > > >> > > > > > > > "If you want to create a production-ready cloud, you'll >> want to >> > > use >> > > > > the >> > > > > > > > TripleO quickstart guide." >> > > > > > > > >> > > > > > > > which is a contradiction in terms really. >> > > > > > > > >> > > > > > > > Cheers >> > > > > > > > >> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: >> > > > > > > > > Pedro / Christopher, >> > > > > > > > > >> > > > > > > > > Just wanted to share with you that I also had plenty of >> issues >> > > > > > > > > deploying on bare metal HA servers, and have paused the >> > > deployment >> > > > > > > > > using TripleO until better winds start to flow here. I was >> > > able to >> > > > > > > > > deploy the QuickStart, but on bare metal the history was >> > > different. >> > > > > > > > > Couldn't even deploy a two server configuration. 
>> > > > > > > > > >> > > > > > > > > I was thinking that it would be good to have the >> developers >> > > have >> > > > > > > > > access to one of our environments and go through a full >> install >> > > > > with >> > > > > > > > > us to better see where things fail. We can do this >> handholding >> > > > > > > > > deployment once every week/month based on developers time >> > > > > > > > > availability. That way we can get a working install, and >> we can >> > > > > > > > > troubleshoot real life environment problems. >> > > > > > > > > >> > > > > > > > > >> > > > > > > > > IB >> > > > > > > > > >> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa < >> pgsousa at gmail.com> >> > > wrote: >> > > > > > > > > >> > > > > > > > > > Yes. I've used this, but I'll try again as there's >> seems to >> > > be >> > > > > new >> > > > > > > > > > updates. >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > > Stable Branch Skip all repos mentioned above, other than >> > > epel- >> > > > > > > > > > release which is still required. >> > > > > > > > > > Enable latest RDO Stable Delorean repository for all >> packages >> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo >> > > > > https://trunk.r >> > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo >> > > > > > > > > > Enable the Delorean Deps repository >> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo >> > > > > http://tru >> > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo >> > > > > > > > > > >> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < >> > > > > cbrown2 at ocf.co. >> > > > > > > > > > uk> wrote: >> > > > > > > > > > > No, Liberty deployed ok for us. >> > > > > > > > > > > >> > > > > > > > > > > It suggests to me a package mismatch. Have you >> completely >> > > > > rebuilt >> > > > > > > > > > > the >> > > > > > > > > > > undercloud and then the images using Liberty? 
>> > > > > > > > > > > >> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote: >> > > > > > > > > > > > AttributeError: 'module' object has no attribute >> > > 'PortOpt' >> > > > > > > > > > > -- >> > > > > > > > > > > Regards, >> > > > > > > > > > > >> > > > > > > > > > > Christopher Brown >> > > > > > > > > > > OpenStack Engineer >> > > > > > > > > > > OCF plc >> > > > > > > > > > > >> > > > > > > > > > > Tel: +44 (0)114 257 2200 >> > > > > > > > > > > Web: www.ocf.co.uk >> > > > > > > > > > > Blog: blog.ocf.co.uk >> > > > > > > > > > > Twitter: @ocfplc >> > > > > > > > > > > >> > > > > > > > > > > Please note, any emails relating to an OCF Support >> request >> > > must >> > > > > > > > > > > always >> > > > > > > > > > > be sent to support at ocf.co.uk for a ticket number to >> be >> > > > > generated >> > > > > > > > > > > or >> > > > > > > > > > > existing support ticket to be updated. Should this >> not be >> > > done >> > > > > > > > > > > then OCF >> > > > > > > > > > > >> > > > > > > > > > > cannot be held responsible for requests not dealt >> with in a >> > > > > > > > > > > timely >> > > > > > > > > > > manner. >> > > > > > > > > > > >> > > > > > > > > > > OCF plc is a company registered in England and Wales. >> > > > > Registered >> > > > > > > > > > > number >> > > > > > > > > > > >> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office >> > > address: >> > > > > > > > > > > OCF plc, >> > > > > > > > > > > >> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, >> Chapeltown, >> > > > > > > > > > > Sheffield S35 >> > > > > > > > > > > 2PG. >> > > > > > > > > > > >> > > > > > > > > > > If you have received this message in error, please >> notify >> > > us >> > > > > > > > > > > immediately and remove it from your system. 
>> > > > > > > > > > > >> > > > > > > > > >> > > > > > > > > _______________________________________________ >> > > > > > > > > rdo-list mailing list >> > > > > > > > > rdo-list at redhat.com >> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > > > > > > > >> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > > > > > -- >> > > > > > > > Regards, >> > > > > > > > >> > > > > > > > Christopher Brown >> > > > > > > > OpenStack Engineer >> > > > > > > > OCF plc >> > > > > > > > >> > > > > > > > Tel: +44 (0)114 257 2200 >> > > > > > > > Web: www.ocf.co.uk >> > > > > > > > Blog: blog.ocf.co.uk >> > > > > > > > Twitter: @ocfplc >> > > > > > > > >> > > > > > > > Please note, any emails relating to an OCF Support request >> must >> > > > > always >> > > > > > > > be sent to support at ocf.co.uk for a ticket number to be >> > > generated or >> > > > > > > > existing support ticket to be updated. Should this not be >> done >> > > then >> > > > > OCF >> > > > > > > > >> > > > > > > > cannot be held responsible for requests not dealt with in a >> > > timely >> > > > > > > > manner. >> > > > > > > > >> > > > > > > > OCF plc is a company registered in England and Wales. >> Registered >> > > > > number >> > > > > > > > >> > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office >> address: >> > > OCF >> > > > > plc, >> > > > > > > > >> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, >> > > Sheffield >> > > > > S35 >> > > > > > > > 2PG. >> > > > > > > > >> > > > > > > > If you have received this message in error, please notify us >> > > > > > > > immediately and remove it from your system. 
>> > > > > > > > _______________________________________________
>> > > > > > > > rdo-list mailing list
>> > > > > > > > rdo-list at redhat.com
>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
>> > > > > > > >
>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marius at remote-lab.net Sat Jun 4 15:14:11 2016
From: marius at remote-lab.net (Marius Cornea)
Date: Sat, 4 Jun 2016 17:14:11 +0200
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: 
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID: 

Hi Pedro,

Scaling out controller nodes is not supported at this moment:
https://bugzilla.redhat.com/show_bug.cgi?id=1243312

On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa wrote:
> Hi,
>
> some update on scaling the cloud:
>
> 1 controller + 1 compute -> 1 controller + 3 computes: OK
>
> 1 controller + 3 computes -> 3 controllers + 3 computes: FAILS
>
> Problem: the new controller nodes are stuck in "pcsd start", so it seems
> to be a problem joining the pacemaker cluster... Has anyone had this
> problem?
>
> Regards
>
> On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa wrote:
>> Hi,
>>
>> I finally managed to install a baremetal Mitaka deployment with 1
>> controller + 1 compute with network isolation. Thank god :)
>>
>> All I did was:
>>
>> #yum install centos-release-openstack-mitaka
>> #sudo yum install python-tripleoclient
>>
>> without epel repos.
>>
>> Then followed the instructions from the Red Hat site.
>> >> I downloaded the overcloud images from: >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ >> >> I do have an issue that forces me to delete a json file and run >> os-refresh-config inside my overcloud nodes other than that it installs >> fine. >> >> Now I'll test with more 2 controllers + 2 computes to have a full HA >> deployment. >> >> If anyone needs help to document this I'll be happy to help. >> >> Regards, >> Pedro Sousa >> >> >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote: >>> >>> The report says: "Fix Released" as of 2016-05-24. >>> Are you installing on a clean system with the latest repositories? >>> >>> Might also want to check your version of rabbitmq: I have >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. >>> >>> ----- Original Message ----- >>> > From: "Pedro Sousa" >>> > To: "Ronelle Landy" >>> > Cc: "Christopher Brown" , "Ignacio Bravo" >>> > , "rdo-list" >>> > >>> > Sent: Friday, June 3, 2016 1:20:43 PM >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> > >>> > Anyway to workaround this? Maybe downgrade hiera? >>> > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy >>> > wrote: >>> > >>> > > I am not sure exactly where you installed from, and when you did your >>> > > installation, but any chance, you've hit: >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? >>> > > There is a link bugzilla record. >>> > > >>> > > ----- Original Message ----- >>> > > > From: "Pedro Sousa" >>> > > > To: "Ronelle Landy" >>> > > > Cc: "Christopher Brown" , "Ignacio Bravo" < >>> > > ibravo at ltgfederal.com>, "rdo-list" >>> > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> > > > >>> > > > Thanks Ronelle, >>> > > > >>> > > > do you think this kind of errors can be related with network >>> > > > settings? 
>>> > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', >>> > > > resolution='': >>> > > > undefined method `[]' for nil:NilClass Could not retrieve >>> > > > fact='rabbitmq_nodename', resolution='': undefined >>> > > > method `[]' >>> > > > for nil:NilClass" >>> > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy >>> > > > wrote: >>> > > > >>> > > > > Hi Pedro, >>> > > > > >>> > > > > You could use the docs you referred to. >>> > > > > Alternatively, if you want to use a vm for the undercloud and >>> > > > > baremetal >>> > > > > machines for the overcloud, it is possible to use Tripleo >>> > > > > Qucikstart >>> > > with a >>> > > > > few modifications. >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. >>> > > > > >>> > > > > ----- Original Message ----- >>> > > > > > From: "Pedro Sousa" >>> > > > > > To: "Ronelle Landy" >>> > > > > > Cc: "Christopher Brown" , "Ignacio Bravo" < >>> > > > > ibravo at ltgfederal.com>, "rdo-list" >>> > > > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> > > > > > >>> > > > > > Hi Ronelle, >>> > > > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo >>> > > > > > Quickstart >>> > > was for >>> > > > > > deploying virtual environments? >>> > > > > > >>> > > > > > And for baremetal we should use >>> > > > > > >>> > > > > >>> > > >>> > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html >>> > > > > > ? 
>>> > > > > > >>> > > > > > Thanks >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy >>> > > > > > >>> > > wrote: >>> > > > > > >>> > > > > > > Hello, >>> > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal >>> > > > > > > systems - >>> > > using >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and >>> > > > > > > bond-with-vlans >>> > > > > network >>> > > > > > > isolation configurations. >>> > > > > > > >>> > > > > > > Baremetal can have some complicated networking issues but, >>> > > > > > > from >>> > > > > previous >>> > > > > > > experiences, if a single-controller deployment worked but a >>> > > > > > > HA >>> > > > > deployment >>> > > > > > > did not, I would check: >>> > > > > > > - does the HA deployment command include: -e >>> > > > > > > >>> > > > > >>> > > >>> > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >>> > > > > > > - are there possible MTU issues? >>> > > > > > > >>> > > > > > > >>> > > > > > > ----- Original Message ----- >>> > > > > > > > From: "Christopher Brown" >>> > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com >>> > > > > > > > Cc: rdo-list at redhat.com >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> > > > > > > > >>> > > > > > > > Hello Ignacio, >>> > > > > > > > >>> > > > > > > > Thanks for your response and good to know it isn't just me! >>> > > > > > > > >>> > > > > > > > I would be more than happy to provide developers with >>> > > > > > > > access to >>> > > our >>> > > > > > > > bare metal environments. I'll also file some bugzilla >>> > > > > > > > reports to >>> > > see >>> > > > > if >>> > > > > > > > this generates any interest. 
_______________________________________________
rdo-list mailing list
rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

From pgsousa at gmail.com  Sat Jun  4 16:04:19 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Sat, 4 Jun 2016 17:04:19 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID:

Thanks Marius, I can confirm that it installs fine with 3 controllers + 3
computes after recreating the stack.

Regards

On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea wrote:
> Hi Pedro,
>
> Scaling out controller nodes is not supported at this moment:
> https://bugzilla.redhat.com/show_bug.cgi?id=1243312
>
> On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa wrote:
> > Hi,
> >
> > some update on scaling the cloud:
> >
> > 1 controller + 1 compute  -> 1 controller + 3 computes   OK
> >
> > 1 controller + 3 computes -> 3 controllers + 3 computes  FAILS
> >
> > Problem: The new controller nodes are stuck in "pcsd start", so it seems
> > to be a problem joining the pacemaker cluster... Did anyone have this
> > problem?
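For anyone hitting the same wall: a controller stuck at "pcsd start" suggests the node cannot see or authenticate against the existing cluster. A first diagnostic pass might look like the sketch below. This is illustrative only — it assumes pacemaker/pcs are installed on the node, and the commands are written into a script here purely so the sketch can be parse-checked without a cluster.

```shell
# Diagnostic commands one might run on a controller stuck at "pcsd start".
# Collected in a script only so it can be syntax-checked here; running it
# for real requires a node with pacemaker/pcs installed.
cat > check-pcsd.sh <<'EOF'
#!/bin/bash
systemctl status pcsd                  # is the pcs daemon itself up?
pcs status                             # cluster membership as this node sees it
journalctl -u pcsd --since "-1 hour"   # join/auth errors usually land here
EOF
bash -n check-pcsd.sh   # parse check only
```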
> > > > Regards
> > > >
> > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa wrote:
> >>
> >> Hi,
> >>
> >> I finally managed to install baremetal Mitaka with 1 controller + 1
> >> compute with network isolation. Thank god :)
> >>
> >> All I did was:
> >>
> >> #yum install centos-release-openstack-mitaka
> >> #sudo yum install python-tripleoclient
> >>
> >> without the epel repos.
> >>
> >> Then I followed the instructions from the Red Hat site.
> >>
> >> I downloaded the overcloud images from:
> >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
> >>
> >> I do have an issue that forces me to delete a json file and run
> >> os-refresh-config inside my overcloud nodes; other than that it installs
> >> fine.
> >>
> >> Now I'll test with 2 more controllers + 2 computes to have a full HA
> >> deployment.
> >>
> >> If anyone needs help to document this I'll be happy to help.
> >>
> >> Regards,
> >> Pedro Sousa
> >>
> >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:
> >>>
> >>> The report says: "Fix Released" as of 2016-05-24.
> >>> Are you installing on a clean system with the latest repositories?
> >>>
> >>> Might also want to check your version of rabbitmq: I have
> >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
> >>>
> >>> ----- Original Message -----
> >>> > From: "Pedro Sousa"
> >>> > To: "Ronelle Landy"
> >>> > Cc: "Christopher Brown" , "Ignacio Bravo"
> >>> > , "rdo-list"
> >>> >
> >>> > Sent: Friday, June 3, 2016 1:20:43 PM
> >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> >>> >
> >>> > Any way to work around this? Maybe downgrade hiera?
> >>> >
> >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy
> >>> > wrote:
> >>> >
> >>> > > I am not sure exactly where you installed from, and when you did your
> >>> > > installation, but any chance you've hit:
> >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892?
> >>> > > There is a linked bugzilla record.
> >>> > > > >>> > > ----- Original Message ----- > >>> > > > From: "Pedro Sousa" > >>> > > > To: "Ronelle Landy" > >>> > > > Cc: "Christopher Brown" , "Ignacio Bravo" < > >>> > > ibravo at ltgfederal.com>, "rdo-list" > >>> > > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > >>> > > > > >>> > > > Thanks Ronelle, > >>> > > > > >>> > > > do you think this kind of errors can be related with network > >>> > > > settings? > >>> > > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', > >>> > > > resolution='': > >>> > > > undefined method `[]' for nil:NilClass Could not retrieve > >>> > > > fact='rabbitmq_nodename', resolution='': undefined > >>> > > > method `[]' > >>> > > > for nil:NilClass" > >>> > > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy > > >>> > > > wrote: > >>> > > > > >>> > > > > Hi Pedro, > >>> > > > > > >>> > > > > You could use the docs you referred to. > >>> > > > > Alternatively, if you want to use a vm for the undercloud and > >>> > > > > baremetal > >>> > > > > machines for the overcloud, it is possible to use Tripleo > >>> > > > > Qucikstart > >>> > > with a > >>> > > > > few modifications. > >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. > >>> > > > > > >>> > > > > ----- Original Message ----- > >>> > > > > > From: "Pedro Sousa" > >>> > > > > > To: "Ronelle Landy" > >>> > > > > > Cc: "Christopher Brown" , "Ignacio > Bravo" < > >>> > > > > ibravo at ltgfederal.com>, "rdo-list" > >>> > > > > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > >>> > > > > > > >>> > > > > > Hi Ronelle, > >>> > > > > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo > >>> > > > > > Quickstart > >>> > > was for > >>> > > > > > deploying virtual environments? 
> >>> > > > > > > >>> > > > > > And for baremetal we should use > >>> > > > > > > >>> > > > > > >>> > > > >>> > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > >>> > > > > > ? > >>> > > > > > > >>> > > > > > Thanks > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > >>> > > > > > > >>> > > wrote: > >>> > > > > > > >>> > > > > > > Hello, > >>> > > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal > >>> > > > > > > systems - > >>> > > using > >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and > >>> > > > > > > bond-with-vlans > >>> > > > > network > >>> > > > > > > isolation configurations. > >>> > > > > > > > >>> > > > > > > Baremetal can have some complicated networking issues but, > >>> > > > > > > from > >>> > > > > previous > >>> > > > > > > experiences, if a single-controller deployment worked but a > >>> > > > > > > HA > >>> > > > > deployment > >>> > > > > > > did not, I would check: > >>> > > > > > > - does the HA deployment command include: -e > >>> > > > > > > > >>> > > > > > >>> > > > >>> > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > >>> > > > > > > - are there possible MTU issues? > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > ----- Original Message ----- > >>> > > > > > > > From: "Christopher Brown" > >>> > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com > >>> > > > > > > > Cc: rdo-list at redhat.com > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > >>> > > > > > > > > >>> > > > > > > > Hello Ignacio, > >>> > > > > > > > > >>> > > > > > > > Thanks for your response and good to know it isn't just > me! 
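Ronelle's first check above — that the deploy command actually passes the pacemaker environment file — is worth spelling out. A 3-controller/3-compute HA deploy command would look roughly like the sketch below; the scale flags, the network-isolation environment file, and the `~/templates/network-environment.yaml` path are assumptions based on this thread, not a command anyone posted.

```shell
# Illustrative only; written to a file so the sketch can be parse-checked
# without an undercloud. The -e paths are the stock tripleo-heat-templates
# locations; ~/templates/network-environment.yaml is a hypothetical
# user-supplied file for the nic/vlan layout.
cat > deploy-ha.sh <<'EOF'
#!/bin/bash
openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 3 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml
EOF
bash -n deploy-ha.sh   # parse check; actually running it needs a working undercloud
```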
> >>> > > > > > > > > >>> > > > > > > > I would be more than happy to provide developers with > >>> > > > > > > > access to > >>> > > our > >>> > > > > > > > bare metal environments. I'll also file some bugzilla > >>> > > > > > > > reports to > >>> > > see > >>> > > > > if > >>> > > > > > > > this generates any interest. > >>> > > > > > > > > >>> > > > > > > > Please do let me know if you make any progress - I am > >>> > > > > > > > trying to > >>> > > > > deploy > >>> > > > > > > > HA with network isolation, multiple nics and vlans. > >>> > > > > > > > > >>> > > > > > > > The RDO web page states: > >>> > > > > > > > > >>> > > > > > > > "If you want to create a production-ready cloud, you'll > >>> > > > > > > > want to > >>> > > use > >>> > > > > the > >>> > > > > > > > TripleO quickstart guide." > >>> > > > > > > > > >>> > > > > > > > which is a contradiction in terms really. > >>> > > > > > > > > >>> > > > > > > > Cheers > >>> > > > > > > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote: > >>> > > > > > > > > Pedro / Christopher, > >>> > > > > > > > > > >>> > > > > > > > > Just wanted to share with you that I also had plenty of > >>> > > > > > > > > issues > >>> > > > > > > > > deploying on bare metal HA servers, and have paused the > >>> > > deployment > >>> > > > > > > > > using TripleO until better winds start to flow here. I > >>> > > > > > > > > was > >>> > > able to > >>> > > > > > > > > deploy the QuickStart, but on bare metal the history > was > >>> > > different. > >>> > > > > > > > > Couldn't even deploy a two server configuration. > >>> > > > > > > > > > >>> > > > > > > > > I was thinking that it would be good to have the > >>> > > > > > > > > developers > >>> > > have > >>> > > > > > > > > access to one of our environments and go through a full > >>> > > > > > > > > install > >>> > > > > with > >>> > > > > > > > > us to better see where things fail. 
We can do this > >>> > > > > > > > > handholding > >>> > > > > > > > > deployment once every week/month based on developers > time > >>> > > > > > > > > availability. That way we can get a working install, > and > >>> > > > > > > > > we can > >>> > > > > > > > > troubleshoot real life environment problems. > >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > IB > >>> > > > > > > > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > >>> > > > > > > > > > >>> > > wrote: > >>> > > > > > > > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again as there's > >>> > > > > > > > > > seems to > >>> > > be > >>> > > > > new > >>> > > > > > > > > > updates. > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > Stable Branch Skip all repos mentioned above, other > >>> > > > > > > > > > than > >>> > > epel- > >>> > > > > > > > > > release which is still required. > >>> > > > > > > > > > Enable latest RDO Stable Delorean repository for all > >>> > > > > > > > > > packages > >>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > >>> > > > > https://trunk.r > >>> > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > >>> > > > > > > > > > Enable the Delorean Deps repository > >>> > > > > > > > > > sudo curl -o > >>> > > > > > > > > > /etc/yum.repos.d/delorean-deps-liberty.repo > >>> > > > > http://tru > >>> > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > >>> > > > > > > > > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown < > >>> > > > > cbrown2 at ocf.co. > >>> > > > > > > > > > uk> wrote: > >>> > > > > > > > > > > No, Liberty deployed ok for us. > >>> > > > > > > > > > > > >>> > > > > > > > > > > It suggests to me a package mismatch. Have you > >>> > > > > > > > > > > completely > >>> > > > > rebuilt > >>> > > > > > > > > > > the > >>> > > > > > > > > > > undercloud and then the images using Liberty? 
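The two `sudo curl -o ...` steps quoted above simply drop plain yum repo files into `/etc/yum.repos.d`. For orientation, a delorean repo file is a small fragment along these lines — illustrative contents only, since the real file is fetched from trunk.rdoproject.org and pins `baseurl` to a specific trunk build:

```ini
[delorean]
name=delorean-openstack-liberty
baseurl=https://trunk.rdoproject.org/centos7-liberty/current/
enabled=1
gpgcheck=0
priority=1
```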
> >>> > > > > > > > > > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa > wrote: > >>> > > > > > > > > > > > AttributeError: 'module' object has no attribute > >>> > > 'PortOpt' > >>> > > > > > > > > > > -- > >>> > > > > > > > > > > Regards, > >>> > > > > > > > > > > > >>> > > > > > > > > > > Christopher Brown > >>> > > > > > > > > > > OpenStack Engineer > >>> > > > > > > > > > > OCF plc > >>> > > > > > > > > > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 > >>> > > > > > > > > > > Web: www.ocf.co.uk > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > >>> > > > > > > > > > > Twitter: @ocfplc > >>> > > > > > > > > > > > >>> > > > > > > > > > > Please note, any emails relating to an OCF Support > >>> > > > > > > > > > > request > >>> > > must > >>> > > > > > > > > > > always > >>> > > > > > > > > > > be sent to support at ocf.co.uk for a ticket number > to > >>> > > > > > > > > > > be > >>> > > > > generated > >>> > > > > > > > > > > or > >>> > > > > > > > > > > existing support ticket to be updated. Should this > >>> > > > > > > > > > > not be > >>> > > done > >>> > > > > > > > > > > then OCF > >>> > > > > > > > > > > > >>> > > > > > > > > > > cannot be held responsible for requests not dealt > >>> > > > > > > > > > > with in a > >>> > > > > > > > > > > timely > >>> > > > > > > > > > > manner. > >>> > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in England and > Wales. > >>> > > > > Registered > >>> > > > > > > > > > > number > >>> > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. Registered > office > >>> > > address: > >>> > > > > > > > > > > OCF plc, > >>> > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > >>> > > > > > > > > > > Chapeltown, > >>> > > > > > > > > > > Sheffield S35 > >>> > > > > > > > > > > 2PG. 
> >>> > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message in error, please > >>> > > > > > > > > > > notify > >>> > > us > >>> > > > > > > > > > > immediately and remove it from your system. > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > _______________________________________________ > >>> > > > > > > > > rdo-list mailing list > >>> > > > > > > > > rdo-list at redhat.com > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > >>> > > > > > > > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > > > > > > > -- > >>> > > > > > > > Regards, > >>> > > > > > > > > >>> > > > > > > > Christopher Brown > >>> > > > > > > > OpenStack Engineer > >>> > > > > > > > OCF plc > >>> > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > >>> > > > > > > > Web: www.ocf.co.uk > >>> > > > > > > > Blog: blog.ocf.co.uk > >>> > > > > > > > Twitter: @ocfplc > >>> > > > > > > > > >>> > > > > > > > Please note, any emails relating to an OCF Support > request > >>> > > > > > > > must > >>> > > > > always > >>> > > > > > > > be sent to support at ocf.co.uk for a ticket number to be > >>> > > generated or > >>> > > > > > > > existing support ticket to be updated. Should this not be > >>> > > > > > > > done > >>> > > then > >>> > > > > OCF > >>> > > > > > > > > >>> > > > > > > > cannot be held responsible for requests not dealt with > in a > >>> > > timely > >>> > > > > > > > manner. > >>> > > > > > > > > >>> > > > > > > > OCF plc is a company registered in England and Wales. > >>> > > > > > > > Registered > >>> > > > > number > >>> > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office > >>> > > > > > > > address: > >>> > > OCF > >>> > > > > plc, > >>> > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, > >>> > > Sheffield > >>> > > > > S35 > >>> > > > > > > > 2PG. 
> >>> > > > > > > > > >>> > > > > > > > If you have received this message in error, please notify > >>> > > > > > > > us > >>> > > > > > > > immediately and remove it from your system. > >>> > > > > > > > > >>> > > > > > > > _______________________________________________ > >>> > > > > > > > rdo-list mailing list > >>> > > > > > > > rdo-list at redhat.com > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > >>> > > > > > > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >>> > > >> > >> > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkanthi at yahoo.com Sat Jun 4 16:52:46 2016 From: pkanthi at yahoo.com (Prakash Kanthi) Date: Sat, 4 Jun 2016 16:52:46 +0000 (UTC) Subject: [rdo-list] TripleO Install Failure References: <749735896.1189718.1465059166336.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <749735896.1189718.1465059166336.JavaMail.yahoo@mail.yahoo.com> Hi There, I am trying to install OpenStack using TripleO quickstart script on a single server. I am running following error and the script stops. Can you please tell me what is going on? Thanks,PK TASK [setup/undercloud : Set_fact for undercloud ip] ***************************task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:180Saturday 04 June 2016 ?11:22:33 -0500 (0:01:35.278) ? ? ? 
0:08:30.041 ********
ok: [192.168.0.24] => {"ansible_facts": {"undercloud_ip": "192.168.23.37"}, "changed": false}

TASK [setup/undercloud : Wait until ssh is available on undercloud node] *******
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:184
Saturday 04 June 2016  11:22:34 -0500 (0:00:01.249)       0:08:31.291 ********
ok: [192.168.0.24] => {"changed": false, "elapsed": 0, "path": null, "port": 22, "search_regex": null, "state": "started"}

TASK [setup/undercloud : Add undercloud vm to inventory] ***********************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:192
Saturday 04 June 2016  11:22:36 -0500 (0:00:01.610)       0:08:32.902 ********
creating host via 'add_host': hostname=undercloud
changed: [192.168.0.24] => {"add_host": {"groups": ["undercloud"], "host_name": "undercloud", "host_vars": {"ansible_fqdn": "undercloud", "ansible_host": "undercloud", "ansible_private_key_file": "/root/.quickstart/id_rsa_undercloud", "ansible_ssh_extra_args": "-F \"/root/.quickstart/ssh.config.ansible\"", "ansible_user": "stack"}}, "changed": true}

TASK [setup/undercloud : Generate ssh configuration] ***************************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:202
Saturday 04 June 2016  11:22:36 -0500 (0:00:00.687)       0:08:33.590 ********
changed: [192.168.0.24 -> localhost] => {"changed": true, "checksum": "cf7f920ffcaffc8087068797ced179782cb2c167", "dest": "/root/.quickstart/ssh.config.ansible", "gid": 0, "group": "root", "md5sum": "92a43943cbdc33b719c87d7f51e5c66a", "mode": "0644", "owner": "root", "size": 813, "src": "/root/.ansible/tmp/ansible-tmp-1465057357.19-199551301253928/source", "state": "file", "uid": 0}

TASK [setup/undercloud : Configure Ironic pxe_ssh driver] **********************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:211
Saturday 04 June 2016  11:22:38 -0500 (0:00:01.813)       0:08:35.404 ********
fatal: [192.168.0.24]: UNREACHABLE! => {"changed": false, "msg": "[Errno -2] Name or service not known", "unreachable": true}

PLAY [Rebuild inventory] *******************************************************

TASK [setup] *******************************************************************
Saturday 04 June 2016  11:22:39 -0500 (0:00:01.065)       0:08:36.469 ********
ok: [localhost]

TASK [rebuild-inventory : Ensure local working dir exists] *********************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/rebuild-inventory/tasks/main.yml:1
Saturday 04 June 2016  11:22:47 -0500 (0:00:07.951)       0:08:44.421 ********
ok: [localhost -> localhost] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/root/.quickstart", "size": 4096, "state": "directory", "uid": 0}

TASK [rebuild-inventory : rebuild-inventory] ***********************************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/rebuild-inventory/tasks/main.yml:11
Saturday 04 June 2016  11:22:48 -0500 (0:00:01.076)       0:08:45.497 ********
changed: [localhost] => {"changed": true, "checksum": "41c0b5fb2439a528d0b0c6b0f979d3159c3446ca", "dest": "/root/.quickstart/hosts", "gid": 0, "group": "root", "md5sum": "0358d6b476bc5993eb9f31c57d234be6", "mode": "0644", "owner": "root", "size": 410, "src": "/root/.ansible/tmp/ansible-tmp-1465057369.05-223284897662661/source", "state": "file", "uid": 0}

PLAY [Install undercloud and deploy overcloud] *********************************

TASK [tripleo/undercloud : include] ********************************************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks/main.yml:1
Saturday 04 June 2016  11:22:50 -0500 (0:00:01.630)       0:08:47.130 ********
included: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks/create-scripts.yml for undercloud

TASK [tripleo/undercloud : Create undercloud configuration] ********************
task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks/create-scripts.yml:3
Saturday 04 June 2016  11:22:51 -0500 (0:00:01.053)       0:08:48.184 ********
fatal: [undercloud]: UNREACHABLE! => {"changed": false, "msg": "[Errno -2] Name or service not known", "unreachable": true}

PLAY RECAP *********************************************************************
192.168.0.24               : ok=92   changed=46   unreachable=1    failed=0
localhost                  : ok=10   changed=4    unreachable=0    failed=0
undercloud                 : ok=1    changed=0    unreachable=1    failed=0

Saturday 04 June 2016  11:22:53 -0500 (0:00:01.470)       0:08:49.655 ********
===============================================================================
TASK: setup/undercloud : Get undercloud vm ip address ------------------ 95.28s
TASK: setup/undercloud : Resize undercloud image (call virt-resize) ---- 93.98s
TASK: setup/undercloud : Upload undercloud volume to storage pool ------ 66.46s
TASK: setup/undercloud : Get qcow2 image from cache -------------------- 58.57s
TASK: setup/undercloud : Copy instackenv.json to appliance ------------- 14.46s
TASK: setup ------------------------------------------------------------- 8.64s
TASK: setup/undercloud : Inject undercloud ssh public key to appliance --- 8.36s
TASK: setup ------------------------------------------------------------- 8.32s
TASK: setup ------------------------------------------------------------- 7.95s
TASK: setup/undercloud : Perform selinux relabel on undercloud image ---- 4.34s
TASK: setup/overcloud : Check if overcloud volumes exist ---------------- 2.88s
TASK: environment/setup : Whitelist bridges for unprivileged access ----- 2.73s
TASK: environment/setup : Start libvirt networks ------------------------ 2.61s
TASK: setup ------------------------------------------------------------- 2.48s
TASK: environment/teardown : Undefine libvirt networks ------------------ 2.46s
TASK: parts/libvirt : Install packages for libvirt ---------------------- 2.46s
TASK: provision/teardown : Remove non-root user account ----------------- 2.43s
TASK: teardown/nodes : Delete baremetal vm storage ---------------------- 2.39s
TASK: setup/undercloud : Start undercloud vm ---------------------------- 2.36s
TASK: environment/setup : Mark libvirt networks as autostarted ---------- 2.34s
[root at sightApps65 ostest]#
-------------- next part --------------
An HTML attachment was scrubbed...
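Both UNREACHABLE failures in the log above carry the same message, "[Errno -2] Name or service not known": the `undercloud` host alias never resolved, and Ansible reaches it only through the generated `/root/.quickstart/ssh.config.ansible` (visible in the "Generate ssh configuration" task). One way to inspect such a file without connecting anywhere is `ssh -G`, which prints the options ssh would actually use. The config below is a hypothetical stand-in for the generated one; the IP mirrors the `undercloud_ip` the log reported.

```shell
# Hypothetical stand-in for the quickstart-generated ssh config:
cat > ./ssh.config.ansible <<'EOF'
Host undercloud
    HostName 192.168.23.37
    User stack
    IdentityFile ~/.quickstart/id_rsa_undercloud
    StrictHostKeyChecking no
EOF

# 'ssh -G' resolves the alias against the config without opening a connection,
# so it shows whether "undercloud" maps to a real address:
ssh -G -F ./ssh.config.ansible undercloud | grep -i '^hostname'
# prints: hostname 192.168.23.37
```

If `ssh -G` shows no usable `hostname` for the alias, the failure happens before any network traffic, which matches the "Name or service not known" error rather than a timeout.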
URL:

From hguemar at fedoraproject.org  Sat Jun  4 19:32:04 2016
From: hguemar at fedoraproject.org (Haïkel)
Date: Sat, 4 Jun 2016 21:32:04 +0200
Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting
In-Reply-To: <57513763.1070500@redhat.com>
References: <20160530150003.0125060A4009@fedocal02.phx2.fedoraproject.org>
	<57513763.1070500@redhat.com>
Message-ID:

2016-06-03 9:53 GMT+02:00 Marc Dequènes (Duck):
> Quack,
>
> On 05/31/2016 12:00 AM, hguemar at fedoraproject.org wrote:
>> Dear all,
>>
>> You are kindly invited to the meeting:
>>    RDO meeting on 2016-06-01 from 15:00:00 to 16:00:00 UTC
>>    At rdo at irc.freenode.net
>
> I'd try to come from time to time, but maybe not all times as I may have
> conflicting meetings/work. There is a community meeting already and I
> try to notify Rich of important things so it should be ok.
>
> As for next week, there is an important meeting early in the morning for
> LinuxCon Japan, so I would most probably not be there for this (late)
> RDO meeting, sorry.
>
> \_o<
>

No problem, Marc-san, you're invited to edit the agenda and feel free to
ping us.

Dozo ogenki de,
H.
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From przemek.stobrawa at gmail.com  Sun Jun  5 07:17:01 2016
From: przemek.stobrawa at gmail.com (przemek stobrawa)
Date: Sun, 5 Jun 2016 09:17:01 +0200
Subject: [rdo-list] Help
Message-ID:

On 4 Jun 2016 21:34, wrote:
> Send rdo-list mailing list submissions to
>         rdo-list at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://www.redhat.com/mailman/listinfo/rdo-list
> or, via email, send a message with subject or body 'help' to
>         rdo-list-request at redhat.com
>
> You can reach the person managing the list at
>         rdo-list-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of rdo-list digest..."
>
> Today's Topics:
>
>    1. TripleO Install Failure (Prakash Kanthi)
>    2. Re: [Fedocal] Reminder meeting : RDO meeting (Haïkel)
-------------- next part --------------
An HTML attachment was scrubbed...
> URL: <https://www.redhat.com/archives/rdo-list/attachments/20160604/583b5e90/attachment.html>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 4 Jun 2016 21:32:04 +0200
> From: Haïkel
> To: Marc Dequènes (Duck)
> Cc: "rdo-list at redhat.com"
> Subject: Re: [rdo-list] [Fedocal] Reminder meeting : RDO meeting
> Message-ID: SK6c3RPT3urw at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> 2016-06-03 9:53 GMT+02:00 Marc Dequènes (Duck) :
> > Quack,
> >
> > On 05/31/2016 12:00 AM, hguemar at fedoraproject.org wrote:
> >> Dear all,
> >>
> >> You are kindly invited to the meeting:
> >> RDO meeting on 2016-06-01 from 15:00:00 to 16:00:00 UTC
> >> At rdo at irc.freenode.net
> >
> > I'd try to come from time to time, but maybe not all times as I may have
> > conflicting meetings/work. There is a community meeting already and I
> > try to notify Rich of important things so it should be ok.
> >
> > As for next week, there is an important meeting early in the morning for
> > LinuxCon Japan, so I would most probably not be there for this (late)
> > RDO meeting, sorry.
> >
> > \_o<
> >
>
> No problem, Marc-san, you're invited to edit the agenda and feel free to
> ping us.
>
> Dozo ogenki de,
> H.
>
> >
> > _______________________________________________
> > rdo-list mailing list
> > rdo-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rdo-list
> >
> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> ------------------------------
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
>
> End of rdo-list Digest, Vol 39, Issue 23
> ****************************************
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ggillies at redhat.com  Sun Jun  5 23:37:53 2016
From: ggillies at redhat.com (Graeme Gillies)
Date: Mon, 6 Jun 2016 09:37:53 +1000
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: 
References: <1464964179.9673.30.camel@ocf.co.uk>
 <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
 <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
 <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
 <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
Message-ID: <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>

Hi Everyone,

I just wanted to say I have been following this thread quite closely and
can sympathize with some of the pain people are going through to get
TripleO to work.

Currently it's quite difficult and a bit opaque how to actually utilise
the stable Mitaka repos in order to build a functional undercloud and
overcloud environment.

First I wanted to share the steps I have undergone in order to get a
functional overcloud working with RDO Mitaka utilising the RDO stable
release built by CentOS, and then I'll talk about some specific steps I
think need to be undertaken by the RDO/TripleO team in order to provide
a better experience in the future.

To get a functional overcloud using RDO Mitaka, you need to do the
following:

1) Install EPEL on your undercloud
2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your
undercloud
3) Follow the normal steps to install your undercloud (modifying
undercloud.conf, and running openstack undercloud install)
4) You will now need to manually patch ironic on the undercloud in order
to make sure repeated introspection works. This might not be needed if
you don't do any introspection, but I find more often than not you end
up having to do it, so it's worthwhile.
The bug you need to patch is [1] and I typically run the following
commands to apply the patch:

# sudo su -
$ cd /usr/lib/python2.7/site-packages
$ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1
$ systemctl restart openstack-ironic-inspector
$ systemctl restart openstack-ironic-inspector-dnsmasq
$ exit
#

5) Manually patch the undercloud to build overcloud images using the
rdo-release rpm only (which utilises the stable Mitaka repo from
CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying
the file

/usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py

At around line 467 you will see a reference to epel; I add a new line
after that to include the rdo_release DIB element in the build as well.
This typically makes the file look something like

http://paste.openstack.org/show/508196/

(note line 468). Then I create a directory to store my images and build
them specifying the mitaka version of rdo_release. I then upload these
images:

# mkdir ~/images
# cd ~/images
# export RDO_RELEASE=mitaka
# openstack overcloud image build --all
# openstack overcloud image upload --update-existing

6) Because of the bug at [2], which affects the ironic agent ramdisk, we
need to build a set of images utilising RDO Trunk for the mitaka branch
(where the fix is applied), and then upload *only* the new ironic
ramdisk. This is done with:

# mkdir ~/images-mitaka-trunk
# cd ~/images-mitaka-trunk
# export USE_DELOREAN_TRUNK=1
# export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
# export DELOREAN_REPO_FILE="delorean.repo"
# openstack overcloud image build --type agent-ramdisk
# sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk

7) Follow the rest of the documentation to deploy the overcloud
normally.

Please note that obviously your mileage may vary, and this is by no
means an exhaustive list of the problems.
I have however used these steps to do multiple-node deployments (10+
nodes) with HA over different hardware sets with different networking
setups (single nic, multiple nics with bonding + vlans).

With all the different repos floating around, all of which change very
rapidly, combined with the documentation defaults targeting developers
and CI systems (not end users), it's hard not only to get a stable
TripleO install up, but also to communicate and discuss clearly with
others what is working, what is broken, and how to compare two
installations to see if they are experiencing the same issues.

To this end I would like to suggest to the RDO and TripleO community
that we undertake the following:

1) Overhaul all the TripleO documentation so that all the steps default
to utilising/deploying RDO Stable (that is, the releases done by CBS).
There should be colored boxes with alternative steps for those who wish
to use RDO Trunk on the stable branch, and RDO Trunk from master. This
basically inverts the current pattern. I think anyone, operator or
developer, who is working through the documentation for the first time
should be given steps that maximise the chance of success, and thus the
most stable release we have. Once a user has gone through the process
once, they can look at the alternative steps for more aggressive
releases.

2) Patch python-tripleoclient so that by default, when you run
"openstack overcloud image build", it builds the images utilising the
rdo_release DIB element, and sets the RDO_RELEASE environment variable
to 'mitaka' or whatever the current stable release is (and we should
endeavour to update it with new releases).
There should be no extra environment variables necessary to build
images, and by default it should never touch anything RDO Trunk
(delorean) related.

3) For bugs like the two I have mentioned above, we need to have some
sort of robust process for either backporting those patches to the
builds in CBS (I understand we don't do this for various reasons), or
we need some kind of tooling or solution that allows operators to apply
only the fixes they need from RDO Trunk (delorean). We need to ensure
that when an operator utilises TripleO they have the greatest chance of
success; bugs such as these, which severely impact the deployment
process, harm the adoption of TripleO and RDO.

4) We should curate and keep an up-to-date page on rdoproject.org that
highlights the outstanding issues related to TripleO on the RDO Stable
(CBS) releases. These should have links to relevant bugzillas, clean
instructions on how to work around the issue, or cleanly apply a patch
to avoid the issue, and as new releases make it out, we should update
the page to drop workarounds that are no longer needed.

The goal is to push operators/users towards working with our most
stable code as much as possible, and track/curate issues around that.
This way everyone should be on the same page, issues are easier to
discuss and diagnose, and overall people's experiences should be
better.

I'm interested in thoughts, feedback, and concerns, both from the RDO
and TripleO community, and from the Operator/User community.
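The "stable by default" image-build behaviour proposed in point 2 above could
be sketched roughly like this. This is purely illustrative, not
python-tripleoclient's real code: the helper name and the hard-coded
'mitaka' default are assumptions for the sake of the example.

```python
import os

def image_build_env(env=None):
    """Sketch of a 'stable by default' environment for
    'openstack overcloud image build'. Not tripleoclient's actual API;
    the function name and the 'mitaka' default are illustrative only."""
    env = dict(os.environ if env is None else env)
    # Default to the current stable release name; operators would have to
    # opt in to RDO Trunk (delorean) explicitly instead of getting it implicitly.
    env.setdefault('RDO_RELEASE', 'mitaka')
    return env

print(image_build_env({})['RDO_RELEASE'])  # prints: mitaka
```

The point of the sketch is only the default direction: an explicitly set
RDO_RELEASE (or trunk-related variable) would still win, but doing nothing
would land you on the stable release rather than on delorean.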
Regards, Graeme [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 On 05/06/16 02:04, Pedro Sousa wrote: > Thanks Marius, > > I can confirm that it installs fine with 3 controllers + 3 computes > after recreating the stack > > Regards > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea > wrote: > > Hi Pedro, > > Scaling out controller nodes is not supported at this moment: > https://bugzilla.redhat.com/show_bug.cgi?id=1243312 > > On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa > wrote: > > Hi, > > > > some update on scaling the cloud: > > > > 1 controller + 1 compute -> 1 controller + 3 computes OK > > > > 1 controller + 3 computes -> 3 controllers + 3 compute FAILS > > > > Problem: The new controller nodes are "stuck" in "pscd start", so > it seems > > to be a problem joining the pacemaker cluster... Did anyone had this > > problem? > > > > Regards > > > > > > > > > > > > > > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa > wrote: > >> > >> Hi, > >> > >> I finally managed to install a baremetal in mitaka with 1 > controller + 1 > >> compute with network isolation. Thank god :) > >> > >> All I did was: > >> > >> #yum install centos-release-openstack-mitaka > >> #sudo yum install python-tripleoclient > >> > >> without epel repos. > >> > >> Then followed instructions from Redhat Site. > >> > >> I downloaded the overcloud images from: > >> > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ > >> > >> I do have an issue that forces me to delete a json file and run > >> os-refresh-config inside my overcloud nodes other than that it > installs > >> fine. > >> > >> Now I'll test with more 2 controllers + 2 computes to have a full HA > >> deployment. > >> > >> If anyone needs help to document this I'll be happy to help. > >> > >> Regards, > >> Pedro Sousa > >> > >> > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy > wrote: > >>> > >>> The report says: "Fix Released" as of 2016-05-24. 
> >>> Are you installing on a clean system with the latest repositories? > >>> > >>> Might also want to check your version of rabbitmq: I have > >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. > >>> > >>> ----- Original Message ----- > >>> > From: "Pedro Sousa" > > >>> > To: "Ronelle Landy" > > >>> > Cc: "Christopher Brown" >, "Ignacio Bravo" > >>> > >, "rdo-list" > >>> > > > >>> > Sent: Friday, June 3, 2016 1:20:43 PM > >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > >>> > > >>> > Anyway to workaround this? Maybe downgrade hiera? > >>> > > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy > > > >>> > wrote: > >>> > > >>> > > I am not sure exactly where you installed from, and when you > did your > >>> > > installation, but any chance, you've hit: > >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? > >>> > > There is a link bugzilla record. > >>> > > > >>> > > ----- Original Message ----- > >>> > > > From: "Pedro Sousa" > > >>> > > > To: "Ronelle Landy" > > >>> > > > Cc: "Christopher Brown" >, "Ignacio Bravo" < > >>> > > ibravo at ltgfederal.com >, > "rdo-list" > >>> > > > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > >>> > > > > >>> > > > Thanks Ronelle, > >>> > > > > >>> > > > do you think this kind of errors can be related with network > >>> > > > settings? > >>> > > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', > >>> > > > resolution='': > >>> > > > undefined method `[]' for nil:NilClass Could not retrieve > >>> > > > fact='rabbitmq_nodename', resolution='': undefined > >>> > > > method `[]' > >>> > > > for nil:NilClass" > >>> > > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy > > > >>> > > > wrote: > >>> > > > > >>> > > > > Hi Pedro, > >>> > > > > > >>> > > > > You could use the docs you referred to. 
> >>> > > > > Alternatively, if you want to use a vm for the > undercloud and > >>> > > > > baremetal > >>> > > > > machines for the overcloud, it is possible to use Tripleo > >>> > > > > Qucikstart > >>> > > with a > >>> > > > > few modifications. > >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. > >>> > > > > > >>> > > > > ----- Original Message ----- > >>> > > > > > From: "Pedro Sousa" > > >>> > > > > > To: "Ronelle Landy" > > >>> > > > > > Cc: "Christopher Brown" >, "Ignacio Bravo" < > >>> > > > > ibravo at ltgfederal.com >, > "rdo-list" > >>> > > > > > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > >>> > > > > > > >>> > > > > > Hi Ronelle, > >>> > > > > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo > >>> > > > > > Quickstart > >>> > > was for > >>> > > > > > deploying virtual environments? > >>> > > > > > > >>> > > > > > And for baremetal we should use > >>> > > > > > > >>> > > > > > >>> > > > >>> > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > >>> > > > > > ? > >>> > > > > > > >>> > > > > > Thanks > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > >>> > > > > > > > >>> > > wrote: > >>> > > > > > > >>> > > > > > > Hello, > >>> > > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal > >>> > > > > > > systems - > >>> > > using > >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and > >>> > > > > > > bond-with-vlans > >>> > > > > network > >>> > > > > > > isolation configurations. 
> >>> > > > > > > > >>> > > > > > > Baremetal can have some complicated networking > issues but, > >>> > > > > > > from > >>> > > > > previous > >>> > > > > > > experiences, if a single-controller deployment > worked but a > >>> > > > > > > HA > >>> > > > > deployment > >>> > > > > > > did not, I would check: > >>> > > > > > > - does the HA deployment command include: -e > >>> > > > > > > > >>> > > > > > >>> > > > >>> > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > >>> > > > > > > - are there possible MTU issues? > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > ----- Original Message ----- > >>> > > > > > > > From: "Christopher Brown" > > >>> > > > > > > > To: pgsousa at gmail.com , > ibravo at ltgfederal.com > >>> > > > > > > > Cc: rdo-list at redhat.com > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > version? > >>> > > > > > > > > >>> > > > > > > > Hello Ignacio, > >>> > > > > > > > > >>> > > > > > > > Thanks for your response and good to know it isn't > just me! > >>> > > > > > > > > >>> > > > > > > > I would be more than happy to provide developers with > >>> > > > > > > > access to > >>> > > our > >>> > > > > > > > bare metal environments. I'll also file some bugzilla > >>> > > > > > > > reports to > >>> > > see > >>> > > > > if > >>> > > > > > > > this generates any interest. > >>> > > > > > > > > >>> > > > > > > > Please do let me know if you make any progress - I am > >>> > > > > > > > trying to > >>> > > > > deploy > >>> > > > > > > > HA with network isolation, multiple nics and vlans. > >>> > > > > > > > > >>> > > > > > > > The RDO web page states: > >>> > > > > > > > > >>> > > > > > > > "If you want to create a production-ready cloud, > you'll > >>> > > > > > > > want to > >>> > > use > >>> > > > > the > >>> > > > > > > > TripleO quickstart guide." > >>> > > > > > > > > >>> > > > > > > > which is a contradiction in terms really. 
> >>> > > > > > > > > >>> > > > > > > > Cheers > >>> > > > > > > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo > wrote: > >>> > > > > > > > > Pedro / Christopher, > >>> > > > > > > > > > >>> > > > > > > > > Just wanted to share with you that I also had > plenty of > >>> > > > > > > > > issues > >>> > > > > > > > > deploying on bare metal HA servers, and have > paused the > >>> > > deployment > >>> > > > > > > > > using TripleO until better winds start to flow > here. I > >>> > > > > > > > > was > >>> > > able to > >>> > > > > > > > > deploy the QuickStart, but on bare metal the > history was > >>> > > different. > >>> > > > > > > > > Couldn't even deploy a two server configuration. > >>> > > > > > > > > > >>> > > > > > > > > I was thinking that it would be good to have the > >>> > > > > > > > > developers > >>> > > have > >>> > > > > > > > > access to one of our environments and go through > a full > >>> > > > > > > > > install > >>> > > > > with > >>> > > > > > > > > us to better see where things fail. We can do this > >>> > > > > > > > > handholding > >>> > > > > > > > > deployment once every week/month based on > developers time > >>> > > > > > > > > availability. That way we can get a working > install, and > >>> > > > > > > > > we can > >>> > > > > > > > > troubleshoot real life environment problems. > >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > IB > >>> > > > > > > > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > >>> > > > > > > > > > > >>> > > wrote: > >>> > > > > > > > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again as there's > >>> > > > > > > > > > seems to > >>> > > be > >>> > > > > new > >>> > > > > > > > > > updates. > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > Stable Branch Skip all repos mentioned above, > other > >>> > > > > > > > > > than > >>> > > epel- > >>> > > > > > > > > > release which is still required. 
> >>> > > > > > > > > > Enable latest RDO Stable Delorean repository > for all > >>> > > > > > > > > > packages > >>> > > > > > > > > > sudo curl -o > /etc/yum.repos.d/delorean-liberty.repo > >>> > > > > https://trunk.r > >>> > > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > >>> > > > > > > > > > Enable the Delorean Deps repository > >>> > > > > > > > > > sudo curl -o > >>> > > > > > > > > > /etc/yum.repos.d/delorean-deps-liberty.repo > >>> > > > > http://tru > >>> > > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > >>> > > > > > > > > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher > Brown < > >>> > > > > cbrown2 at ocf.co . > >>> > > > > > > > > > uk> wrote: > >>> > > > > > > > > > > No, Liberty deployed ok for us. > >>> > > > > > > > > > > > >>> > > > > > > > > > > It suggests to me a package mismatch. Have you > >>> > > > > > > > > > > completely > >>> > > > > rebuilt > >>> > > > > > > > > > > the > >>> > > > > > > > > > > undercloud and then the images using Liberty? 
> >>> > > > > > > > > > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro > Sousa wrote: > >>> > > > > > > > > > > > AttributeError: 'module' object has no > attribute > >>> > > 'PortOpt' > >>> > > > > > > > > > > -- > >>> > > > > > > > > > > Regards, > >>> > > > > > > > > > > > >>> > > > > > > > > > > Christopher Brown > >>> > > > > > > > > > > OpenStack Engineer > >>> > > > > > > > > > > OCF plc > >>> > > > > > > > > > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 > > >>> > > > > > > > > > > Web: www.ocf.co.uk > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > >>> > > > > > > > > > > Twitter: @ocfplc > >>> > > > > > > > > > > > >>> > > > > > > > > > > Please note, any emails relating to an OCF > Support > >>> > > > > > > > > > > request > >>> > > must > >>> > > > > > > > > > > always > >>> > > > > > > > > > > be sent to support at ocf.co.uk > for a ticket number to > >>> > > > > > > > > > > be > >>> > > > > generated > >>> > > > > > > > > > > or > >>> > > > > > > > > > > existing support ticket to be updated. > Should this > >>> > > > > > > > > > > not be > >>> > > done > >>> > > > > > > > > > > then OCF > >>> > > > > > > > > > > > >>> > > > > > > > > > > cannot be held responsible for requests not > dealt > >>> > > > > > > > > > > with in a > >>> > > > > > > > > > > timely > >>> > > > > > > > > > > manner. > >>> > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in England > and Wales. > >>> > > > > Registered > >>> > > > > > > > > > > number > >>> > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. > Registered office > >>> > > address: > >>> > > > > > > > > > > OCF plc, > >>> > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > >>> > > > > > > > > > > Chapeltown, > >>> > > > > > > > > > > Sheffield S35 > >>> > > > > > > > > > > 2PG. 
> >>> > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message in error, > please > >>> > > > > > > > > > > notify > >>> > > us > >>> > > > > > > > > > > immediately and remove it from your system. > >>> > > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > _______________________________________________ > >>> > > > > > > > > rdo-list mailing list > >>> > > > > > > > > rdo-list at redhat.com > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > >>> > > > > > > > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > >>> > > > > > > > -- > >>> > > > > > > > Regards, > >>> > > > > > > > > >>> > > > > > > > Christopher Brown > >>> > > > > > > > OpenStack Engineer > >>> > > > > > > > OCF plc > >>> > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > > >>> > > > > > > > Web: www.ocf.co.uk > >>> > > > > > > > Blog: blog.ocf.co.uk > >>> > > > > > > > Twitter: @ocfplc > >>> > > > > > > > > >>> > > > > > > > Please note, any emails relating to an OCF Support > request > >>> > > > > > > > must > >>> > > > > always > >>> > > > > > > > be sent to support at ocf.co.uk > for a ticket number to be > >>> > > generated or > >>> > > > > > > > existing support ticket to be updated. Should this > not be > >>> > > > > > > > done > >>> > > then > >>> > > > > OCF > >>> > > > > > > > > >>> > > > > > > > cannot be held responsible for requests not dealt > with in a > >>> > > timely > >>> > > > > > > > manner. > >>> > > > > > > > > >>> > > > > > > > OCF plc is a company registered in England and Wales. > >>> > > > > > > > Registered > >>> > > > > number > >>> > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. Registered office > >>> > > > > > > > address: > >>> > > OCF > >>> > > > > plc, > >>> > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > Chapeltown, > >>> > > Sheffield > >>> > > > > S35 > >>> > > > > > > > 2PG. 
> >>> > > > > > > > > >>> > > > > > > > If you have received this message in error, please > notify > >>> > > > > > > > us > >>> > > > > > > > immediately and remove it from your system. > >>> > > > > > > > > >>> > > > > > > > _______________________________________________ > >>> > > > > > > > rdo-list mailing list > >>> > > > > > > > rdo-list at redhat.com > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > >>> > > > > > > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >>> > > >> > >> > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From dms at redhat.com Mon Jun 6 00:10:41 2016 From: dms at redhat.com (David Moreau Simard) Date: Sun, 5 Jun 2016 20:10:41 -0400 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> Message-ID: On Sun, Jun 5, 2016 at 7:37 PM, Graeme Gillies wrote: > 1) Install EPEL on your undercloud I have little relevant TripleO experience to contribute to this thread but I am pretty sure we no longer require anything from EPEL. 
In the past, this may have been true, notably for Ceph, but the CentOS
Storage SIG now has all the dependencies self-contained.

The Ceph Storage SIG repository for Ceph, under CentOS, is included in
the "centos-release-openstack-mitaka" package or standalone as
"centos-release-ceph-hammer".

What do you need EPEL for? Did we forget something else?

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

From ggillies at redhat.com  Mon Jun  6 00:22:44 2016
From: ggillies at redhat.com (Graeme Gillies)
Date: Mon, 6 Jun 2016 10:22:44 +1000
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: 
References: <1464964179.9673.30.camel@ocf.co.uk>
 <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
 <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
 <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
 <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
 <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
Message-ID: <84ce5a01-301e-eb81-8123-73ed63e88e55@redhat.com>

On 06/06/16 10:10, David Moreau Simard wrote:
> On Sun, Jun 5, 2016 at 7:37 PM, Graeme Gillies wrote:
>> 1) Install EPEL on your undercloud
>
> I have little relevant TripleO experience to contribute to this thread
> but I am pretty sure we no longer require anything from EPEL.
> In the past, this may have been true, notably for Ceph but the CentOS
> storage SIG has all the dependencies self-contained.
>
> The Ceph Storage SIG repository for Ceph, under CentOS, is included in
> the "centos-release-openstack-mitaka" package or standalone as
> "centos-release-ceph-hammer".
>
> What do you need EPEL for ? Did we forget something else ?
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]
>

I can't speak for the TripleO/RDO team but I can confirm that our
instructions specify to install epel [1], and the overcloud images do
indeed get built with epel enabled.
Looking through the packages on the overcloud, the main ones I see
being sourced from epel are http://paste.openstack.org/show/508202/

Regards,

Graeme

[1] http://tripleo.org/installation/installation.html#installing-the-undercloud

-- 
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From pgsousa at gmail.com  Mon Jun  6 00:26:13 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Mon, 6 Jun 2016 01:26:13 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
References: <1464964179.9673.30.camel@ocf.co.uk>
 <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
 <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
 <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
 <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
 <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
Message-ID: 

Hi Graeme,

my 2 cents here: my experience was different from yours. I've managed
to install tripleo with 3 controllers + 3 computes without epel repos
and without applying any patches, using mitaka repos from cbs.

I didn't have any issues with introspection, but I tested on the same
hw (Dell PowerEdge R430).

Then I downloaded some prebuilt overcloud images from delorean repos:
http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/

That being said, there's a lot to be done concerning install
documentation, and how to report bugs and issues, specifically for
baremetal environments, where a lot of people like me intend to use
tripleo for production.

Regards

On Mon, Jun 6, 2016 at 12:37 AM, Graeme Gillies wrote:

> Hi Everyone,
>
> I just wanted to say I have been following this thread quite closely and
> can sympathize with some of the pain people are going through to get
> tripleO to work.
> > Currently it's quite difficult and a bit opaque on how to actually > utilise the stable mitaka repos in order to build a functional > undercloud and overcloud environment. > > First I wanted to share the steps I have undergone in order to get a > functional overcloud working with RDO Mitaka utilising the RDO stable > release built by CentOS, and then I'll talk about some specific steps I > think need to be undertaken by the RDO/TripleO team in order to provide > a better experience in the future. > > To get a functional overcloud using RDO Mitaka, you need to do the > following > > 1) Install EPEL on your undercloud > 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your > undercloud > 3) Following the normal steps to install your undercloud (modifying > undercloud.conf, and running openstack undercloud install > 4) You will now need to manually patch ironic on the undercloud in order > to make sure repeated introspection works. This might not be needed if > you don't do any introspection, but I find more often than not you end > up having to do it, so it's worthwhile. The bug you need to patch is [1] > and I typically run the following commands to apply the patch > > # sudo su - > $ cd /usr/lib/python2.7/site-packages > $ curl > ' > https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download > ' > | base64 -d | patch -p1 > $ systemctl restart openstack-ironic-inspector > $ systemctl restart openstack-ironic-inspector-dnsmasq > $ exit > # > > 5) Manually patch the undercloud to build overcloud images using > rhos-release rpm only (which utilises the stable Mitaka repo from > CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying > the file > > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py > > At around line 467 you will see a reference to epel, I add a new line > after that to include the rdo_release DIB element to the build as well. 
> This typically makes the file look something like > > http://paste.openstack.org/show/508196/ > > (note line 468). Then I create a directory to store my images and build > them specifying the mitaka version of rdo_release. I then upload these > images > > # mkdir ~/images > # cd ~/images > # export RDO_RELEASE=mitaka > # openstack overcloud image build --all > # openstack overcloud image upload --update-existing > > 6) Because of the bug at [2] which affects the ironic agent ramdisk, we > need to build a set of images utilising RDO Trunk for the mitaka branch > (where the fix is applied), and then upload *only* the new ironic > ramdisk. This is done with > > # mkdir ~/images-mitaka-trunk > # cd ~/images-mitaka-trunk > # export USE_DELOREAN_TRUNK=1 > # export > DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/" > # export DELOREAN_REPO_FILE="delorean.repo" > # openstack overcloud image build --type agent-ramdisk > # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk > > 7) Follow the rest of the documentation to deploy the overcloud normally > > Please note that obviously your mileage may vary, and this is by all > means not an exclusive list of the problems. I have however used these > steps to do multiple node deployments (10+ nodes) with HA over different > hardware sets with different networking setups (single nic, multiple nic > with bonding + vlans). > > With all the different repos floating around, all of which change very > rapidly, combined with the documentation defaults targeting developers > and CI systems (not end users), it's hard to not only get a stable > TripleO install up, but also communicate and discuss clearly with others > what is working, what is broken, and how to compare two installations to > see if they are experiencing the same issues. 
> > To this end I would like to suggest to the RDO and TripleO community > that we undertake the following > > 1) Overhaul all the TripleO documentation so that all the steps default > to utilising/deploying using RDO Stable (that is, the releases done by > CBS). There should be colored boxes with alt steps for those who wish to > use RDO Trunk on the stable branch, and RDO Trunk from master. This > basically inverts the current pattern. I think anyone, Operator or > developer, who is working through the documentation for the first time, > should be given steps that maximise the chance of success, and thus the > most stable release we have. Once a user has gone through the process > once, they can look at the alternative steps for more aggressive releases > > 2) Patch python-tripleoclient so that by default, when you run > "openstack overcloud image build" it builds the images utilising the > rdo_release DIB element, and sets the RDO_RELEASE environment variable > to be 'mitaka' or whatever the current stable release is (and we should > endeavour to update it with new releases). There should be no extra > environment variables necessary to build images, and by default it > should never touch anything RDO Trunk (delorean) related > > 3) For bugs like the two I have mentioned above, we need to have some > sort of robust process for either backporting those patches to the > builds in CBS (I understand we don't do this for various reasons), or we > need some kind of tooling or solution that allows operators to apply > only the fixes they need from RDO Trunk (delorean). We need to ensure > that when an Operator utilises TripleO they have the greatest chance of > success, and bugs such as these which severely impact the deployment > process harm the adoption of TripleO and RDO. > > 4) We should curate and keep an up-to-date page on rdoproject.org that > highlights the outstanding issues related to TripleO on the RDO > Stable (CBS) releases. 
These should have links to relevant bugzillas, > clean instructions on how to work around the issue, or cleanly apply a > patch to avoid the issue, and as new releases make it out, we should > update the page to drop off workarounds that are no longer needed. > > The goal is to push Operators/Users to working with our most stable code > as much as possible, and track/curate issues around that. This way > everyone should be on the same page, issues are easier to discuss and > diagnose, and overall people's experiences should be better. > > I'm interested in thoughts, feedback, and concerns, both from the RDO > and TripleO community, and from the Operator/User community. > > Regards, > > Graeme > > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 > > On 05/06/16 02:04, Pedro Sousa wrote: > > Thanks Marius, > > > > I can confirm that it installs fine with 3 controllers + 3 computes > > after recreating the stack > > > > Regards > > > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea > > wrote: > > > > Hi Pedro, > > > > Scaling out controller nodes is not supported at this moment: > > https://bugzilla.redhat.com/show_bug.cgi?id=1243312 > > > > On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa > > wrote: > > > Hi, > > > > > > some update on scaling the cloud: > > > > > > 1 controller + 1 compute -> 1 controller + 3 computes OK > > > > > > 1 controller + 3 computes -> 3 controllers + 3 computes FAILS > > > > > > Problem: The new controller nodes are "stuck" in "pcsd start", so > > it seems > > > to be a problem joining the pacemaker cluster... Did anyone have this > > > problem? > > > > > > Regards > > > > > > > > > > > > > > > > > > > > > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa > > wrote: > > >> > > >> Hi, > > >> > > >> I finally managed to install a baremetal in mitaka with 1 > > controller + 1 > > >> compute with network isolation. 
Thank god :) > > >> > > >> All I did was: > > >> > > >> #yum install centos-release-openstack-mitaka > > >> #sudo yum install python-tripleoclient > > >> > > >> without epel repos. > > >> > > >> Then followed instructions from Redhat Site. > > >> > > >> I downloaded the overcloud images from: > > >> > > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ > > >> > > >> I do have an issue that forces me to delete a json file and run > > >> os-refresh-config inside my overcloud nodes other than that it > > installs > > >> fine. > > >> > > >> Now I'll test with more 2 controllers + 2 computes to have a full > HA > > >> deployment. > > >> > > >> If anyone needs help to document this I'll be happy to help. > > >> > > >> Regards, > > >> Pedro Sousa > > >> > > >> > > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy > > wrote: > > >>> > > >>> The report says: "Fix Released" as of 2016-05-24. > > >>> Are you installing on a clean system with the latest > repositories? > > >>> > > >>> Might also want to check your version of rabbitmq: I have > > >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. > > >>> > > >>> ----- Original Message ----- > > >>> > From: "Pedro Sousa" pgsousa at gmail.com>> > > >>> > To: "Ronelle Landy" rlandy at redhat.com>> > > >>> > Cc: "Christopher Brown" > >, "Ignacio Bravo" > > >>> > >, > "rdo-list" > > >>> > > > > >>> > Sent: Friday, June 3, 2016 1:20:43 PM > > >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > >>> > > > >>> > Anyway to workaround this? Maybe downgrade hiera? > > >>> > > > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy > > > > > >>> > wrote: > > >>> > > > >>> > > I am not sure exactly where you installed from, and when you > > did your > > >>> > > installation, but any chance, you've hit: > > >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? > > >>> > > There is a link bugzilla record. 
> > >>> > > > > >>> > > ----- Original Message ----- > > >>> > > > From: "Pedro Sousa" > > > > >>> > > > To: "Ronelle Landy" > > > > >>> > > > Cc: "Christopher Brown" > >, "Ignacio Bravo" < > > >>> > > ibravo at ltgfederal.com >, > > "rdo-list" > > >>> > > > > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM > > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > >>> > > > > > >>> > > > Thanks Ronelle, > > >>> > > > > > >>> > > > do you think this kind of errors can be related with > network > > >>> > > > settings? > > >>> > > > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', > > >>> > > > resolution='': > > >>> > > > undefined method `[]' for nil:NilClass Could not retrieve > > >>> > > > fact='rabbitmq_nodename', resolution='': > undefined > > >>> > > > method `[]' > > >>> > > > for nil:NilClass" > > >>> > > > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy > > > > > >>> > > > wrote: > > >>> > > > > > >>> > > > > Hi Pedro, > > >>> > > > > > > >>> > > > > You could use the docs you referred to. > > >>> > > > > Alternatively, if you want to use a vm for the > > undercloud and > > >>> > > > > baremetal > > >>> > > > > machines for the overcloud, it is possible to use Tripleo > > >>> > > > > Qucikstart > > >>> > > with a > > >>> > > > > few modifications. > > >>> > > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. > > >>> > > > > > > >>> > > > > ----- Original Message ----- > > >>> > > > > > From: "Pedro Sousa" > > > > >>> > > > > > To: "Ronelle Landy" > > > > >>> > > > > > Cc: "Christopher Brown" > >, "Ignacio Bravo" < > > >>> > > > > ibravo at ltgfederal.com >, > > "rdo-list" > > >>> > > > > > > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > version? 
> > >>> > > > > > > > >>> > > > > > Hi Ronelle, > > >>> > > > > > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo > > >>> > > > > > Quickstart > > >>> > > was for > > >>> > > > > > deploying virtual environments? > > >>> > > > > > > > >>> > > > > > And for baremetal we should use > > >>> > > > > > > > >>> > > > > > > >>> > > > > >>> > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > >>> > > > > > ? > > >>> > > > > > > > >>> > > > > > Thanks > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > > >>> > > > > > > > > >>> > > wrote: > > >>> > > > > > > > >>> > > > > > > Hello, > > >>> > > > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on > baremetal > > >>> > > > > > > systems - > > >>> > > using > > >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and > > >>> > > > > > > bond-with-vlans > > >>> > > > > network > > >>> > > > > > > isolation configurations. > > >>> > > > > > > > > >>> > > > > > > Baremetal can have some complicated networking > > issues but, > > >>> > > > > > > from > > >>> > > > > previous > > >>> > > > > > > experiences, if a single-controller deployment > > worked but a > > >>> > > > > > > HA > > >>> > > > > deployment > > >>> > > > > > > did not, I would check: > > >>> > > > > > > - does the HA deployment command include: -e > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > >>> > > > > > > - are there possible MTU issues? 
> > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > ----- Original Message ----- > > >>> > > > > > > > From: "Christopher Brown" > > > > >>> > > > > > > > To: pgsousa at gmail.com , > > ibravo at ltgfederal.com > > >>> > > > > > > > Cc: rdo-list at redhat.com rdo-list at redhat.com> > > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > > version? > > >>> > > > > > > > > > >>> > > > > > > > Hello Ignacio, > > >>> > > > > > > > > > >>> > > > > > > > Thanks for your response and good to know it isn't > > just me! > > >>> > > > > > > > > > >>> > > > > > > > I would be more than happy to provide developers > with > > >>> > > > > > > > access to > > >>> > > our > > >>> > > > > > > > bare metal environments. I'll also file some > bugzilla > > >>> > > > > > > > reports to > > >>> > > see > > >>> > > > > if > > >>> > > > > > > > this generates any interest. > > >>> > > > > > > > > > >>> > > > > > > > Please do let me know if you make any progress - I > am > > >>> > > > > > > > trying to > > >>> > > > > deploy > > >>> > > > > > > > HA with network isolation, multiple nics and vlans. > > >>> > > > > > > > > > >>> > > > > > > > The RDO web page states: > > >>> > > > > > > > > > >>> > > > > > > > "If you want to create a production-ready cloud, > > you'll > > >>> > > > > > > > want to > > >>> > > use > > >>> > > > > the > > >>> > > > > > > > TripleO quickstart guide." > > >>> > > > > > > > > > >>> > > > > > > > which is a contradiction in terms really. 
> > >>> > > > > > > > > > >>> > > > > > > > Cheers > > >>> > > > > > > > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo > > wrote: > > >>> > > > > > > > > Pedro / Christopher, > > >>> > > > > > > > > > > >>> > > > > > > > > Just wanted to share with you that I also had > > plenty of > > >>> > > > > > > > > issues > > >>> > > > > > > > > deploying on bare metal HA servers, and have > > paused the > > >>> > > deployment > > >>> > > > > > > > > using TripleO until better winds start to flow > > here. I > > >>> > > > > > > > > was > > >>> > > able to > > >>> > > > > > > > > deploy the QuickStart, but on bare metal the > > history was > > >>> > > different. > > >>> > > > > > > > > Couldn't even deploy a two server configuration. > > >>> > > > > > > > > > > >>> > > > > > > > > I was thinking that it would be good to have the > > >>> > > > > > > > > developers > > >>> > > have > > >>> > > > > > > > > access to one of our environments and go through > > a full > > >>> > > > > > > > > install > > >>> > > > > with > > >>> > > > > > > > > us to better see where things fail. We can do > this > > >>> > > > > > > > > handholding > > >>> > > > > > > > > deployment once every week/month based on > > developers time > > >>> > > > > > > > > availability. That way we can get a working > > install, and > > >>> > > > > > > > > we can > > >>> > > > > > > > > troubleshoot real life environment problems. > > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > IB > > >>> > > > > > > > > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > > >>> > > > > > > > > > > > >>> > > wrote: > > >>> > > > > > > > > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again as > there's > > >>> > > > > > > > > > seems to > > >>> > > be > > >>> > > > > new > > >>> > > > > > > > > > updates. 
> > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > Stable Branch Skip all repos mentioned above, > > other > > >>> > > > > > > > > > than > > >>> > > epel- > > >>> > > > > > > > > > release which is still required. > > >>> > > > > > > > > > Enable latest RDO Stable Delorean repository > > for all > > >>> > > > > > > > > > packages > > >>> > > > > > > > > > sudo curl -o > > /etc/yum.repos.d/delorean-liberty.repo > > >>> > > > > https://trunk.r > > >>> > > > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > >>> > > > > > > > > > Enable the Delorean Deps repository > > >>> > > > > > > > > > sudo curl -o > > >>> > > > > > > > > > /etc/yum.repos.d/delorean-deps-liberty.repo > > >>> > > > > http://tru > > >>> > > > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher > > Brown < > > >>> > > > > cbrown2 at ocf.co . > > >>> > > > > > > > > > uk> wrote: > > >>> > > > > > > > > > > No, Liberty deployed ok for us. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > It suggests to me a package mismatch. Have > you > > >>> > > > > > > > > > > completely > > >>> > > > > rebuilt > > >>> > > > > > > > > > > the > > >>> > > > > > > > > > > undercloud and then the images using Liberty? 
> > >>> > > > > > > > > > > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro > > Sousa wrote: > > >>> > > > > > > > > > > > AttributeError: 'module' object has no > > attribute > > >>> > > 'PortOpt' > > >>> > > > > > > > > > > -- > > >>> > > > > > > > > > > Regards, > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Christopher Brown > > >>> > > > > > > > > > > OpenStack Engineer > > >>> > > > > > > > > > > OCF plc > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > >>> > > > > > > > > > > Web: www.ocf.co.uk > > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > > >>> > > > > > > > > > > Twitter: @ocfplc > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Please note, any emails relating to an OCF > > Support > > >>> > > > > > > > > > > request > > >>> > > must > > >>> > > > > > > > > > > always > > >>> > > > > > > > > > > be sent to support at ocf.co.uk > > for a ticket number to > > >>> > > > > > > > > > > be > > >>> > > > > generated > > >>> > > > > > > > > > > or > > >>> > > > > > > > > > > existing support ticket to be updated. > > Should this > > >>> > > > > > > > > > > not be > > >>> > > done > > >>> > > > > > > > > > > then OCF > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > cannot be held responsible for requests not > > dealt > > >>> > > > > > > > > > > with in a > > >>> > > > > > > > > > > timely > > >>> > > > > > > > > > > manner. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in England > > and Wales. > > >>> > > > > Registered > > >>> > > > > > > > > > > number > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. > > Registered office > > >>> > > address: > > >>> > > > > > > > > > > OCF plc, > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > > >>> > > > > > > > > > > Chapeltown, > > >>> > > > > > > > > > > Sheffield S35 > > >>> > > > > > > > > > > 2PG. 
> > >>> > > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message in error, > > please > > >>> > > > > > > > > > > notify > > >>> > > us > > >>> > > > > > > > > > > immediately and remove it from your system. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > _______________________________________________ > > >>> > > > > > > > > rdo-list mailing list > > >>> > > > > > > > > rdo-list at redhat.com > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > >>> > > > > > > > > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > >>> > > > > > > > -- > > >>> > > > > > > > Regards, > > >>> > > > > > > > > > >>> > > > > > > > Christopher Brown > > >>> > > > > > > > OpenStack Engineer > > >>> > > > > > > > OCF plc > > >>> > > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > > > > >>> > > > > > > > Web: www.ocf.co.uk > > >>> > > > > > > > Blog: blog.ocf.co.uk > > >>> > > > > > > > Twitter: @ocfplc > > >>> > > > > > > > > > >>> > > > > > > > Please note, any emails relating to an OCF Support > > request > > >>> > > > > > > > must > > >>> > > > > always > > >>> > > > > > > > be sent to support at ocf.co.uk > > for a ticket number to be > > >>> > > generated or > > >>> > > > > > > > existing support ticket to be updated. Should this > > not be > > >>> > > > > > > > done > > >>> > > then > > >>> > > > > OCF > > >>> > > > > > > > > > >>> > > > > > > > cannot be held responsible for requests not dealt > > with in a > > >>> > > timely > > >>> > > > > > > > manner. > > >>> > > > > > > > > > >>> > > > > > > > OCF plc is a company registered in England and > Wales. > > >>> > > > > > > > Registered > > >>> > > > > number > > >>> > > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. 
Registered > office > > >>> > > > > > > > address: > > >>> > > OCF > > >>> > > > > plc, > > >>> > > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > > Chapeltown, > > >>> > > Sheffield > > >>> > > > > S35 > > >>> > > > > > > > 2PG. > > >>> > > > > > > > > > >>> > > > > > > > If you have received this message in error, please > > notify > > >>> > > > > > > > us > > >>> > > > > > > > immediately and remove it from your system. > > >>> > > > > > > > > > >>> > > > > > > > _______________________________________________ > > >>> > > > > > > > rdo-list mailing list > > >>> > > > > > > > rdo-list at redhat.com > > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > >>> > > > > > > > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >> > > >> > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ggillies at redhat.com Mon Jun 6 00:51:14 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Mon, 6 Jun 2016 10:51:14 +1000 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> Message-ID: <0d69c56a-1e27-9623-7e08-b652010bb3e3@redhat.com> On 06/06/16 10:26, Pedro Sousa wrote: > Hi Graeme, > > my 2 cents here, my experience was different from yours. > > I've managed to install tripleo with 3 controllers + 3 computes without > epel repos and without applying any patches, using mitaka repos from > cbs. I didn't have any issues with introspection, but I tested on the > same hw (Dell PowerEdge R430). Then I downloaded some prebuilt overcloud > images from delorean > repos: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ > > That being said, there's a lot to be done concerning install > documentation, and how to report bugs and issues, specifically for > baremetal environments, where a lot of people like me intend to use > tripleo for production. > > Regards The patch I note in step 4) is to work around an issue that only affects you if you introspect a node more than once. It will always work the first time; it's the second time that it will fail. So if you are only introspecting once then you don't need it. The patch I apply in step 5) is to work around an issue that is fixed in DLRN, and is only necessary if you want to build images yourself and not use prebuilt ones. By using prebuilt ones from DLRN, you are in effect getting the fix by nature of that (and thus don't need the patch). 
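The CBS-versus-DLRN distinction can be scripted. The snippet below is only a sketch: the repo directory is a parameter so the logic can be exercised anywhere, and on a real overcloud node you would point it at /etc/yum.repos.d (with `rpm -q rdo-release` on the node as the complementary check).

```shell
# Classify a node by the contents of its yum repo directory: a deployment
# built purely from RDO Stable (CBS) should carry no delorean.repo, while
# a DLRN-based deployment will. (Heuristic per this thread, not official.)
check_repos() {
  if [ -f "$1/delorean.repo" ]; then
    echo "RDO Trunk (delorean repo present)"
  else
    echo "RDO Stable (no delorean repo)"
  fi
}

# Local demonstration with a temporary directory standing in for
# /etc/yum.repos.d on an overcloud node.
tmp=$(mktemp -d)
touch "$tmp/CentOS-Base.repo" "$tmp/delorean.repo"
check_repos "$tmp"   # prints: RDO Trunk (delorean repo present)
rm "$tmp/delorean.repo"
check_repos "$tmp"   # prints: RDO Stable (no delorean repo)
rm -rf "$tmp"
```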
Note that if you are also sourcing your overcloud-full.qcow2 from DLRN repos, then you are no longer deploying an overcloud that uses CBS; you are deploying an overcloud from DLRN. You can confirm this by checking whether the rdo-release RPM is installed on your overcloud, and that there is no 'delorean.repo' in /etc/yum.repos.d. Regards, Graeme > > > > > > On Mon, Jun 6, 2016 at 12:37 AM, Graeme Gillies > wrote: > > Hi Everyone, > > I just wanted to say I have been following this thread quite closely and > can sympathize with some of the pain people are going through to get > tripleO to work. > > Currently it's quite difficult and a bit opaque on how to actually > utilise the stable mitaka repos in order to build a functional > undercloud and overcloud environment. > > First I wanted to share the steps I have undergone in order to get a > functional overcloud working with RDO Mitaka utilising the RDO stable > release built by CentOS, and then I'll talk about some specific steps I > think need to be undertaken by the RDO/TripleO team in order to provide > a better experience in the future. > > To get a functional overcloud using RDO Mitaka, you need to do the > following > > 1) Install EPEL on your undercloud > 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your > undercloud > 3) Follow the normal steps to install your undercloud (modifying > undercloud.conf, and running openstack undercloud install) > 4) You will now need to manually patch ironic on the undercloud in order > to make sure repeated introspection works. This might not be needed if > you don't do any introspection, but I find more often than not you end > up having to do it, so it's worthwhile. 
The bug you need to patch is [1] > and I typically run the following commands to apply the patch > > # sudo su - > $ cd /usr/lib/python2.7/site-packages > $ curl > 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' > | base64 -d | patch -p1 > $ systemctl restart openstack-ironic-inspector > $ systemctl restart openstack-ironic-inspector-dnsmasq > $ exit > # > > 5) Manually patch the undercloud to build overcloud images using > rhos-release rpm only (which utilises the stable Mitaka repo from > CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying > the file > > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py > > At around line 467 you will see a reference to epel, I add a new line > after that to include the rdo_release DIB element to the build as well. > This typically makes the file look something like > > http://paste.openstack.org/show/508196/ > > (note like 468). Then I create a directory to store my images and build > them specifying the mitaka version of rdo_release. I then upload these > images > > # mkdir ~/images > # cd ~/images > # export RDO_RELEASE=mitaka > # openstack overcloud image build --all > # openstack overcloud image upload --update-existing > > 6) Because of the bug at [2] which affects the ironic agent ramdisk, we > need to build a set of images utilising RDO Trunk for the mitaka branch > (where the fix is applied), and then upload *only* the new ironic > ramdisk. 
This is done with > > # mkdir ~/images-mitaka-trunk > # cd ~/images-mitaka-trunk > # export USE_DELOREAN_TRUNK=1 > # export > DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/" > # export DELOREAN_REPO_FILE="delorean.repo" > # openstack overcloud image build --type agent-ramdisk > # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk > > 7) Follow the rest of the documentation to deploy the overcloud normally > > Please note that obviously your mileage may vary, and this is by all > means not an exclusive list of the problems. I have however used these > steps to do multiple node deployments (10+ nodes) with HA over different > hardware sets with different networking setups (single nic, multiple nic > with bonding + vlans). > > With all the different repos floating around, all which change very > rapidly, combined with the documentation defaults targeting developers > and CI systems (not end users), it's hard to not only get a stable > TripleO install up, but also communicate and discuss clearly with others > what is working, what is broken, and how to compare two installations to > see if they are experiencing the same issues. > > To this end I would like to suggest to the RDO and TripleO community > that we undertake the following > > 1) Overhaul all the TripleO documentation so that all the steps default > to utilising/deploying using RDO Stable (that is, the releases done by > CBS). There should be colored boxes with alt steps for those who wish to > use RDO Trunk on the stable branch, and RDO Trunk from master. This > basically inverts the current pattern. I think anyone, Operator or > developer, who is working through the documentation for the first time, > should be given steps that maximise the chance of success, and thus the > most stable release we have. 
Once a user has gone through the process > once, they can look at the alternative steps for more aggressive > releases > > 2) Patch python-tripleoclient so that by default, when you run > "openstack overcloud image build" it builds the images utilising the > rdo_release DIB element, and sets the RDO_RELEASE environment variable > to be 'mitaka' or whenever the current stable release is (and we should > endevour to update it with new releases). There should be no extra > environment variables necessary to build images, and by default it > should never touch anything RDO Trunk (delorean) related > > 3) For bugs like the two I have mentioned above, we need to have some > sort of robust process for either backporting those patches to the > builds in CBS (I understand we don't do this for various reasons), or we > need some kind of tooling or solution that allows operators to apply > only the fixes they need from RDO Trunk (delorean). We need to ensure > that when an Operator utilises TripleO they have the greatest chance of > success, and bugs such as these which severely impact the deployment > process harm the adoption of TripleO and RDO. > > 4) We should curate and keep an up to date page on rdoproject.org > that > does highlight the outstanding issues related to TripleO on the RDO > Stable (CBS) releases. These should have links to relevant bugzillas, > clean instructions on how to work around the issue, or cleanly apply a > patch to avoid the issue, and as new releases make it out, we should > update the page to drop off workarounds that are no longer needed. > > The goal is to push Operators/Users to working with our most stable code > as much as possible, and track/curate issues around that. This way > everyone should be on the same page, issues are easier to discuss and > diagnose, and overall peoples experiences should be better. 
> > I'm interested in thoughts, feedback, and concerns, both from the RDO > and TripleO community, and from the Operator/User community. > > Regards, > > Graeme > > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 > > On 05/06/16 02:04, Pedro Sousa wrote: > > Thanks Marius, > > > > I can confirm that it installs fine with 3 controllers + 3 computes > > after recreating the stack > > > > Regards > > > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea > > >> wrote: > > > > Hi Pedro, > > > > Scaling out controller nodes is not supported at this moment: > > https://bugzilla.redhat.com/show_bug.cgi?id=1243312 > > > > On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa > > >> wrote: > > > Hi, > > > > > > some update on scaling the cloud: > > > > > > 1 controller + 1 compute -> 1 controller + 3 computes OK > > > > > > 1 controller + 3 computes -> 3 controllers + 3 compute FAILS > > > > > > Problem: The new controller nodes are "stuck" in "pscd start", so > > it seems > > > to be a problem joining the pacemaker cluster... Did anyone had this > > > problem? > > > > > > Regards > > > > > > > > > > > > > > > > > > > > > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa > > >> wrote: > > >> > > >> Hi, > > >> > > >> I finally managed to install a baremetal in mitaka with 1 > > controller + 1 > > >> compute with network isolation. Thank god :) > > >> > > >> All I did was: > > >> > > >> #yum install centos-release-openstack-mitaka > > >> #sudo yum install python-tripleoclient > > >> > > >> without epel repos. > > >> > > >> Then followed instructions from Redhat Site. > > >> > > >> I downloaded the overcloud images from: > > >> > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ > > >> > > >> I do have an issue that forces me to delete a json file and run > > >> os-refresh-config inside my overcloud nodes other than that it > > installs > > >> fine. 
> > >> > > >> Now I'll test with more 2 controllers + 2 computes to have a full HA > > >> deployment. > > >> > > >> If anyone needs help to document this I'll be happy to help. > > >> > > >> Regards, > > >> Pedro Sousa > > >> > > >> > > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy > > >> wrote: > > >>> > > >>> The report says: "Fix Released" as of 2016-05-24. > > >>> Are you installing on a clean system with the latest repositories? > > >>> > > >>> Might also want to check your version of rabbitmq: I have > > >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. > > >>> > > >>> ----- Original Message ----- > > >>> > From: "Pedro Sousa" > >> > > >>> > To: "Ronelle Landy" > >> > > >>> > Cc: "Christopher Brown" > > >>, "Ignacio Bravo" > > >>> > > >>, > "rdo-list" > > >>> > > >> > > >>> > Sent: Friday, June 3, 2016 1:20:43 PM > > >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > >>> > > > >>> > Anyway to workaround this? Maybe downgrade hiera? > > >>> > > > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy > > > >> > > >>> > wrote: > > >>> > > > >>> > > I am not sure exactly where you installed from, and when you > > did your > > >>> > > installation, but any chance, you've hit: > > >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? > > >>> > > There is a link bugzilla record. > > >>> > > > > >>> > > ----- Original Message ----- > > >>> > > > From: "Pedro Sousa" > > >> > > >>> > > > To: "Ronelle Landy" > > >> > > >>> > > > Cc: "Christopher Brown" > > >>, "Ignacio Bravo" < > > >>> > > ibravo at ltgfederal.com > >>, > > "rdo-list" > > >>> > > > > >> > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM > > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > >>> > > > > > >>> > > > Thanks Ronelle, > > >>> > > > > > >>> > > > do you think this kind of errors can be related with network > > >>> > > > settings? 
> > >>> > > > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', > > >>> > > > resolution='': > > >>> > > > undefined method `[]' for nil:NilClass Could not retrieve > > >>> > > > fact='rabbitmq_nodename', resolution='': undefined > > >>> > > > method `[]' > > >>> > > > for nil:NilClass" > > >>> > > > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy > > > >> > > >>> > > > wrote: > > >>> > > > > > >>> > > > > Hi Pedro, > > >>> > > > > > > >>> > > > > You could use the docs you referred to. > > >>> > > > > Alternatively, if you want to use a vm for the > > undercloud and > > >>> > > > > baremetal > > >>> > > > > machines for the overcloud, it is possible to use Tripleo > > >>> > > > > Qucikstart > > >>> > > with a > > >>> > > > > few modifications. > > >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. > > >>> > > > > > > >>> > > > > ----- Original Message ----- > > >>> > > > > > From: "Pedro Sousa" > > >> > > >>> > > > > > To: "Ronelle Landy" > > >> > > >>> > > > > > Cc: "Christopher Brown" > > >>, "Ignacio Bravo" < > > >>> > > > > ibravo at ltgfederal.com > >>, > > "rdo-list" > > >>> > > > > > > >> > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > >>> > > > > > > > >>> > > > > > Hi Ronelle, > > >>> > > > > > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo > > >>> > > > > > Quickstart > > >>> > > was for > > >>> > > > > > deploying virtual environments? > > >>> > > > > > > > >>> > > > > > And for baremetal we should use > > >>> > > > > > > > >>> > > > > > > >>> > > > > >>> > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > >>> > > > > > ? 
> > >>> > > > > > > > >>> > > > > > Thanks > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > > >>> > > > > > > >> > > >>> > > wrote: > > >>> > > > > > > > >>> > > > > > > Hello, > > >>> > > > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal > > >>> > > > > > > systems - > > >>> > > using > > >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and > > >>> > > > > > > bond-with-vlans > > >>> > > > > network > > >>> > > > > > > isolation configurations. > > >>> > > > > > > > > >>> > > > > > > Baremetal can have some complicated networking > > issues but, > > >>> > > > > > > from > > >>> > > > > previous > > >>> > > > > > > experiences, if a single-controller deployment > > worked but a > > >>> > > > > > > HA > > >>> > > > > deployment > > >>> > > > > > > did not, I would check: > > >>> > > > > > > - does the HA deployment command include: -e > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > >>> > > > > > > - are there possible MTU issues? > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > ----- Original Message ----- > > >>> > > > > > > > From: "Christopher Brown" > > >> > > >>> > > > > > > > To: pgsousa at gmail.com > >, > > ibravo at ltgfederal.com > > > > >>> > > > > > > > Cc: rdo-list at redhat.com > > > > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > > version? > > >>> > > > > > > > > > >>> > > > > > > > Hello Ignacio, > > >>> > > > > > > > > > >>> > > > > > > > Thanks for your response and good to know it > isn't > > just me! > > >>> > > > > > > > > > >>> > > > > > > > I would be more than happy to provide > developers with > > >>> > > > > > > > access to > > >>> > > our > > >>> > > > > > > > bare metal environments. 
I'll also file some > bugzilla > > >>> > > > > > > > reports to > > >>> > > see > > >>> > > > > if > > >>> > > > > > > > this generates any interest. > > >>> > > > > > > > > > >>> > > > > > > > Please do let me know if you make any > progress - I am > > >>> > > > > > > > trying to > > >>> > > > > deploy > > >>> > > > > > > > HA with network isolation, multiple nics and > vlans. > > >>> > > > > > > > > > >>> > > > > > > > The RDO web page states: > > >>> > > > > > > > > > >>> > > > > > > > "If you want to create a production-ready cloud, > > you'll > > >>> > > > > > > > want to > > >>> > > use > > >>> > > > > the > > >>> > > > > > > > TripleO quickstart guide." > > >>> > > > > > > > > > >>> > > > > > > > which is a contradiction in terms really. > > >>> > > > > > > > > > >>> > > > > > > > Cheers > > >>> > > > > > > > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo > > wrote: > > >>> > > > > > > > > Pedro / Christopher, > > >>> > > > > > > > > > > >>> > > > > > > > > Just wanted to share with you that I also had > > plenty of > > >>> > > > > > > > > issues > > >>> > > > > > > > > deploying on bare metal HA servers, and have > > paused the > > >>> > > deployment > > >>> > > > > > > > > using TripleO until better winds start to flow > > here. I > > >>> > > > > > > > > was > > >>> > > able to > > >>> > > > > > > > > deploy the QuickStart, but on bare metal the > > history was > > >>> > > different. > > >>> > > > > > > > > Couldn't even deploy a two server > configuration. > > >>> > > > > > > > > > > >>> > > > > > > > > I was thinking that it would be good to > have the > > >>> > > > > > > > > developers > > >>> > > have > > >>> > > > > > > > > access to one of our environments and go > through > > a full > > >>> > > > > > > > > install > > >>> > > > > with > > >>> > > > > > > > > us to better see where things fail. 
We can > do this > > >>> > > > > > > > > handholding > > >>> > > > > > > > > deployment once every week/month based on > > developers time > > >>> > > > > > > > > availability. That way we can get a working > > install, and > > >>> > > > > > > > > we can > > >>> > > > > > > > > troubleshoot real life environment problems. > > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > IB > > >>> > > > > > > > > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > > >>> > > > > > > > > >> > > >>> > > wrote: > > >>> > > > > > > > > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again as there's > > >>> > > > > > > > > > seems to > > >>> > > be > > >>> > > > > new > > >>> > > > > > > > > > updates. > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > Stable Branch Skip all repos mentioned above, > > other > > >>> > > > > > > > > > than > > >>> > > epel- > > >>> > > > > > > > > > release which is still required. > > >>> > > > > > > > > > Enable latest RDO Stable Delorean repository > > for all > > >>> > > > > > > > > > packages > > >>> > > > > > > > > > sudo curl -o > > /etc/yum.repos.d/delorean-liberty.repo > > >>> > > > > https://trunk.r > > >>> > > > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > >>> > > > > > > > > > Enable the Delorean Deps repository > > >>> > > > > > > > > > sudo curl -o > > >>> > > > > > > > > > /etc/yum.repos.d/delorean-deps-liberty.repo > > >>> > > > > http://tru > > >>> > > > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher > > Brown < > > >>> > > > > cbrown2 at ocf.co > >. > > >>> > > > > > > > > > uk> wrote: > > >>> > > > > > > > > > > No, Liberty deployed ok for us. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > It suggests to me a package mismatch. 
Have you > > >>> > > > > > > > > > > completely > > >>> > > > > rebuilt > > >>> > > > > > > > > > > the > > >>> > > > > > > > > > > undercloud and then the images using Liberty? > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro > > Sousa wrote: > > >>> > > > > > > > > > > > AttributeError: 'module' object has no > > attribute > > >>> > > 'PortOpt' > > >>> > > > > > > > > > > -- > > >>> > > > > > > > > > > Regards, > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Christopher Brown > > >>> > > > > > > > > > > OpenStack Engineer > > >>> > > > > > > > > > > OCF plc > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > >>> > > > > > > > > > > Web: www.ocf.co.uk > > > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > > > >>> > > > > > > > > > > Twitter: @ocfplc > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Please note, any emails relating to an OCF > > Support > > >>> > > > > > > > > > > request > > >>> > > must > > >>> > > > > > > > > > > always > > >>> > > > > > > > > > > be sent to support at ocf.co.uk > > > for a > ticket number to > > >>> > > > > > > > > > > be > > >>> > > > > generated > > >>> > > > > > > > > > > or > > >>> > > > > > > > > > > existing support ticket to be updated. > > Should this > > >>> > > > > > > > > > > not be > > >>> > > done > > >>> > > > > > > > > > > then OCF > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > cannot be held responsible for > requests not > > dealt > > >>> > > > > > > > > > > with in a > > >>> > > > > > > > > > > timely > > >>> > > > > > > > > > > manner. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in England > > and Wales. > > >>> > > > > Registered > > >>> > > > > > > > > > > number > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. 
> > Registered office > > >>> > > address: > > >>> > > > > > > > > > > OCF plc, > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe > Park, > > >>> > > > > > > > > > > Chapeltown, > > >>> > > > > > > > > > > Sheffield S35 > > >>> > > > > > > > > > > 2PG. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message in > error, > > please > > >>> > > > > > > > > > > notify > > >>> > > us > > >>> > > > > > > > > > > immediately and remove it from your > system. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > _______________________________________________ > > >>> > > > > > > > > rdo-list mailing list > > >>> > > > > > > > > rdo-list at redhat.com > > > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > >>> > > > > > > > > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > >>> > > > > > > > -- > > >>> > > > > > > > Regards, > > >>> > > > > > > > > > >>> > > > > > > > Christopher Brown > > >>> > > > > > > > OpenStack Engineer > > >>> > > > > > > > OCF plc > > >>> > > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > > > > >>> > > > > > > > Web: www.ocf.co.uk > > > >>> > > > > > > > Blog: blog.ocf.co.uk > > > >>> > > > > > > > Twitter: @ocfplc > > >>> > > > > > > > > > >>> > > > > > > > Please note, any emails relating to an OCF Support > > request > > >>> > > > > > > > must > > >>> > > > > always > > >>> > > > > > > > be sent to support at ocf.co.uk > > > for a > ticket number to be > > >>> > > generated or > > >>> > > > > > > > existing support ticket to be updated. > Should this > > not be > > >>> > > > > > > > done > > >>> > > then > > >>> > > > > OCF > > >>> > > > > > > > > > >>> > > > > > > > cannot be held responsible for requests not > dealt > > with in a > > >>> > > timely > > >>> > > > > > > > manner. 
> > >>> > > > > > > > > > >>> > > > > > > > OCF plc is a company registered in England > and Wales. > > >>> > > > > > > > Registered > > >>> > > > > number > > >>> > > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. > Registered office > > >>> > > > > > > > address: > > >>> > > OCF > > >>> > > > > plc, > > >>> > > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > > Chapeltown, > > >>> > > Sheffield > > >>> > > > > S35 > > >>> > > > > > > > 2PG. > > >>> > > > > > > > > > >>> > > > > > > > If you have received this message in error, > please > > notify > > >>> > > > > > > > us > > >>> > > > > > > > immediately and remove it from your system. > > >>> > > > > > > > > > >>> > > > > > > > _______________________________________________ > > >>> > > > > > > > rdo-list mailing list > > >>> > > > > > > > rdo-list at redhat.com > > > > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > >>> > > > > > > > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >> > > >> > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > -- Graeme 
Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From me at gbraad.nl Mon Jun 6 03:32:34 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 6 Jun 2016 11:32:34 +0800 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? Message-ID: HI, Some time ago Michele Baldessari wrote about deploying OpenStack TripleO on a bunch of Intel NUCs. To enable this he needed to make some minor changes [1] to os-cloud-config and python-tripleoclient to add basic support for AMT (utilizing wsmancli). I have tried this out myself with a set of Lenovo M93p's and it does work. I believe basic support of AMT can be very helpful to allow tests on more COTS products, when IPMI is not available. Is this something that can be considered for inclusion? regards, Gerard [1] http://acksyn.org/files/tripleo/nuc.patch -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From ggillies at redhat.com Mon Jun 6 03:39:07 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Mon, 6 Jun 2016 13:39:07 +1000 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? In-Reply-To: References: Message-ID: On 06/06/16 13:32, Gerard Braad wrote: > HI, > > > Some time ago Michele Baldessari wrote about deploying OpenStack > TripleO on a bunch of Intel NUCs. To enable this he needed to make > some minor changes [1] to os-cloud-config and python-tripleoclient to > add basic support for AMT (utilizing wsmancli). > > I have tried this out myself with a set of Lenovo M93p's and it does > work. I believe basic support of AMT can be very helpful to allow > tests on more COTS products, when IPMI is not available. Is this > something that can be considered for inclusion? > > regards, > > > Gerard > > [1] http://acksyn.org/files/tripleo/nuc.patch > Full support for Intel AMT and NUCs has been in upstream TripleO for a while (and I believe downstream as of RHOS 8). 
For NUCs without AMT you can use the wol driver which I use for all testing in my NUC lab. Regards, Graeme -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From me at gbraad.nl Mon Jun 6 04:13:14 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 6 Jun 2016 12:13:14 +0800 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? In-Reply-To: References: Message-ID: Hi. On Mon, Jun 6, 2016 at 11:39 AM, Graeme Gillies wrote: > Full support for Intel AMT and NUCs has been in upstream TripleO for a > while (and I believe downstream as of RHOS 8). For NUCs without AMT you > can use the wol driver which I use for all testing in my NUC lab. Is support available because of the following? /tripleo-common/tripleo_common/utils/nodes.py: 187: '.*_amt': PrefixedDriverInfo('amt'), If so, I was unable to find documentation that describes this is possible. It would have been sufficient to describe a node in instackenv.json as follows "pm_type":"pxe_amt", "pm_user":"admin", "pm_password":"admin", "pm_addr":"192.168.1.101", "mac": ["de:8a:ad:be:3c:ef:"] without any additional patches or software? The article [1] I refer to was written last March. And this is also what I needed to make it work. regards, Gerard [1] http://acksyn.org/posts/2016/03/tripleo-on-nucs/ -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From dtantsur at redhat.com Mon Jun 6 04:30:20 2016 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Jun 2016 06:30:20 +0200 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? In-Reply-To: References: Message-ID: <672f37dd-ab14-1173-5ffc-50e9ebe00fd9@redhat.com> On 06/06/2016 06:13 AM, Gerard Braad wrote: > Hi. Hi Gerard. > > > On Mon, Jun 6, 2016 at 11:39 AM, Graeme Gillies wrote: >> Full support for Intel AMT and NUCs has been in upstream TripleO for a >> while (and I believe downstream as of RHOS 8). 
For NUCs without AMT you >> can use the wol driver which I use for all testing in my NUC lab. > > Is support available because of the following? > > /tripleo-common/tripleo_common/utils/nodes.py: > 187: '.*_amt': PrefixedDriverInfo('amt'), Yes. > > If so, I was unable to find documentation that describes this is possible. > > It would have been sufficient to describe a node in instackenv.json as follows > > "pm_type":"pxe_amt", > "pm_user":"admin", > "pm_password":"admin", > "pm_addr":"192.168.1.101", > "mac": ["de:8a:ad:be:3c:ef:"] > > without any additional patches or software? The article [1] I refer to > was written last March. And this is also what I needed to make it > work. Should be, the patch enabling it was merged post-March. > > regards, > > > Gerard > > > [1] http://acksyn.org/posts/2016/03/tripleo-on-nucs/ > From me at gbraad.nl Mon Jun 6 04:57:36 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 6 Jun 2016 12:57:36 +0800 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? In-Reply-To: <672f37dd-ab14-1173-5ffc-50e9ebe00fd9@redhat.com> References: <672f37dd-ab14-1173-5ffc-50e9ebe00fd9@redhat.com> Message-ID: Hi, On Mon, Jun 6, 2016 at 12:30 PM, Dmitry Tantsur wrote: > On 06/06/2016 06:13 AM, Gerard Braad wrote: >> Is support available because of the following? >> /tripleo-common/tripleo_common/utils/nodes.py: >> 187: '.*_amt': PrefixedDriverInfo('amt'), > Yes. So, is this in Liberty packages? If so, I will retest this later this week. regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From dtantsur at redhat.com Mon Jun 6 05:52:54 2016 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 6 Jun 2016 07:52:54 +0200 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? 
In-Reply-To: References: <672f37dd-ab14-1173-5ffc-50e9ebe00fd9@redhat.com> Message-ID: On 06/06/2016 06:57 AM, Gerard Braad wrote: > Hi, > > On Mon, Jun 6, 2016 at 12:30 PM, Dmitry Tantsur wrote: >> On 06/06/2016 06:13 AM, Gerard Braad wrote: >>> Is support available because of the following? >>> /tripleo-common/tripleo_common/utils/nodes.py: >>> 187: '.*_amt': PrefixedDriverInfo('amt'), >> Yes. > > So, is this in Liberty packages? If so, I will retest this later this week. Sorry, I didn't realize you're asking about Liberty. I'm afraid it's only available in Mitaka. You can always enroll nodes manually, something like: ironic node-create -d pxe_amt -i amt_username=user -i amt_password=password -i amt_address=address ironic port-create -n NODE_UUID -a MAC and then proceed with 'configure boot' and inspection. > > regards, > > Gerard > From me at gbraad.nl Mon Jun 6 06:02:50 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 6 Jun 2016 14:02:50 +0800 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? In-Reply-To: <20160606055834.GA20209@palahniuk.int.rhx> References: <20160606055834.GA20209@palahniuk.int.rhx> Message-ID: Hi, On Mon, Jun 6, 2016 at 1:58 PM, Michele Baldessari wrote: > As Dmitry mentioned, most of the stuff has landed already. I > haven't yet played with it, because the disk hosting my > undercloud vm died and I had no time to reinstall. Once I get > to it, I will drop you a line (unless you beat me to it ;) Thanks for the input... I also have to make some time for this... I was lucky our home systems had to be reinstalled, so I was able to do a whole flow... but I will try as what Dmitry just suggested. I will try to manually enroll a system. cheers, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From michele at acksyn.org Mon Jun 6 05:58:34 2016 From: michele at acksyn.org (Michele Baldessari) Date: Mon, 6 Jun 2016 07:58:34 +0200 Subject: [rdo-list] [tripleo] How about support of AMT for deployment? 
In-Reply-To: References: Message-ID: <20160606055834.GA20209@palahniuk.int.rhx> Hi Gerard, On Mon, Jun 06, 2016 at 11:32:34AM +0800, Gerard Braad wrote: > Some time ago Michele Baldessari wrote about deploying OpenStack > TripleO on a bunch of Intel NUCs. To enable this he needed to make > some minor changes [1] to os-cloud-config and python-tripleoclient to > add basic support for AMT (utilizing wsmancli). > > I have tried this out myself with a set of Lenovo M93p's and it does > work. I believe basic support of AMT can be very helpful to allow > tests on more COTS products, when IPMI is not available. Is this > something that can be considered for inclusion? As Dmitry mentioned, most of the stuff has landed already. I haven't yet played with it, because the disk hosting my undercloud vm died and I had no time to reinstall. Once I get to it, I will drop you a line (unless you beat me to it ;) cheers, Michele -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From pgsousa at gmail.com Mon Jun 6 09:33:15 2016 From: pgsousa at gmail.com (Pedro Sousa) Date: Mon, 6 Jun 2016 10:33:15 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <0d69c56a-1e27-9623-7e08-b652010bb3e3@redhat.com> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> <0d69c56a-1e27-9623-7e08-b652010bb3e3@redhat.com> Message-ID: Hi Graeme, I ran introspection several times and didn't have the problem you mentioned. Always worked fine. 
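To tie the AMT thread together: the manual enrollment Dmitry describes above can be wrapped in a small script. This is a sketch only; the address, credentials, and MAC are the placeholder values quoted in the thread, and if no ironic client is installed the script prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch of the manual AMT enrollment from the thread above. The address,
# credentials, and MAC are placeholder values quoted there -- substitute
# your own. If the ironic client is absent (e.g. outside the undercloud),
# stub it out so each command is printed rather than executed.
if ! command -v ironic >/dev/null 2>&1; then
    ironic() { echo "would run: ironic $*"; }
fi

# Enroll the node with the AMT power driver.
ironic node-create -d pxe_amt \
    -i amt_username=admin \
    -i amt_password=admin \
    -i amt_address=192.168.1.101

# Attach the node's MAC; NODE_UUID stands in for the uuid printed by
# the node-create call above.
ironic port-create -n NODE_UUID -a de:8a:ad:be:3c:ef

# Then proceed with 'configure boot' and inspection as usual.
```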
However I did have problems with my generated overcloud images: "Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass" After downloading the delorean-generated images the problem was gone. Regards, Pedro Sousa On Mon, Jun 6, 2016 at 1:51 AM, Graeme Gillies wrote: > On 06/06/16 10:26, Pedro Sousa wrote: > > Hi Graeme, > > > > my 2 cents here, my experience was different from yours. > > > > I've managed to install tripleo with 3 controllers + 3 computes without > > epel repos and without applying any patches, using mitaka repos from > > cbs. I didn't have any issues with introspection, but I tested on the > > same hw (Dell PowerEdge R430). Then I downloaded some prebuilt overcloud > > images from delorean > > repos: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ > > > > That being said, there's a lot to be done concerning install > > documentation, and how to report bugs and issues, specifically for > > baremetal environments, where a lot of people like me intend to use > > tripleo for production. > > > > Regards > > The patch I note in step 4) is to work around an issue that only affects > you if you introspect a node more than once. It will always work the > first time; it's the second time that will fail. So if you are only > introspecting once then you don't need it. > > The patch I apply in step 5) is to work around an issue that is fixed in > DLRN, and is only necessary if you want to build images yourself and not > use prebuilt ones. By using prebuilt ones from DLRN, you are in effect > getting the fix by nature of that (and thus don't need the patch). > > Note that if you are also sourcing your overcloud-full.qcow2 from DLRN > repos, then you are no longer deploying an overcloud that uses CBS, you > are deploying an overcloud from DLRN. 
You can confirm this by checking > if the rdo-release rpm is installed on your overcloud, and check that in > /etc/yum.repos.d there is no 'delorean.repo' > > Regards, > > Graeme > > > > > > > > > > > > > On Mon, Jun 6, 2016 at 12:37 AM, Graeme Gillies > > wrote: > > > > Hi Everyone, > > > > I just wanted to say I have been following this thread quite closely > and > > can sympathize with some of the pain people are going through to get > > tripleO to work. > > > > Currently it's quite difficult and a bit opaque on how to actually > > utilise the stable mitaka repos in order to build a functional > > undercloud and overcloud environment. > > > > First I wanted to share the steps I have undergone in order to get a > > functional overcloud working with RDO Mitaka utilising the RDO stable > > release built by CentOS, and then I'll talk about some specific > steps I > > think need to be undertaken by the RDO/TripleO team in order to > provide > > a better experience in the future. > > > > To get a functional overcloud using RDO Mitaka, you need to do the > > following > > > > 1) Install EPEL on your undercloud > > 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your > > undercloud > > 3) Following the normal steps to install your undercloud (modifying > > undercloud.conf, and running openstack undercloud install > > 4) You will now need to manually patch ironic on the undercloud in > order > > to make sure repeated introspection works. This might not be needed > if > > you don't do any introspection, but I find more often than not you > end > > up having to do it, so it's worthwhile. 
The bug you need to patch is > [1] > > and I typically run the following commands to apply the patch > > > > # sudo su - > > $ cd /usr/lib/python2.7/site-packages > > $ curl > > ' > https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download > ' > > | base64 -d | patch -p1 > > $ systemctl restart openstack-ironic-inspector > > $ systemctl restart openstack-ironic-inspector-dnsmasq > > $ exit > > # > > > > 5) Manually patch the undercloud to build overcloud images using > > rhos-release rpm only (which utilises the stable Mitaka repo from > > CentOS, and nothing from RDO Trunk [delorean]). I do this by > modifying > > the file > > > > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py > > > > At around line 467 you will see a reference to epel, I add a new line > > after that to include the rdo_release DIB element to the build as > well. > > This typically makes the file look something like > > > > http://paste.openstack.org/show/508196/ > > > > (note like 468). Then I create a directory to store my images and > build > > them specifying the mitaka version of rdo_release. I then upload > these > > images > > > > # mkdir ~/images > > # cd ~/images > > # export RDO_RELEASE=mitaka > > # openstack overcloud image build --all > > # openstack overcloud image upload --update-existing > > > > 6) Because of the bug at [2] which affects the ironic agent ramdisk, > we > > need to build a set of images utilising RDO Trunk for the mitaka > branch > > (where the fix is applied), and then upload *only* the new ironic > > ramdisk. 
This is done with > > > > # mkdir ~/images-mitaka-trunk > > # cd ~/images-mitaka-trunk > > # export USE_DELOREAN_TRUNK=1 > > # export > > DELOREAN_TRUNK_REPO=" > http://trunk.rdoproject.org/centos7-mitaka/current/" > > # export DELOREAN_REPO_FILE="delorean.repo" > > # openstack overcloud image build --type agent-ramdisk > > # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk > > > > 7) Follow the rest of the documentation to deploy the overcloud > normally > > > > Please note that obviously your mileage may vary, and this is by all > > means not an exclusive list of the problems. I have however used > these > > steps to do multiple node deployments (10+ nodes) with HA over > different > > hardware sets with different networking setups (single nic, multiple > nic > > with bonding + vlans). > > > > With all the different repos floating around, all which change very > > rapidly, combined with the documentation defaults targeting > developers > > and CI systems (not end users), it's hard to not only get a stable > > TripleO install up, but also communicate and discuss clearly with > others > > what is working, what is broken, and how to compare two > installations to > > see if they are experiencing the same issues. > > > > To this end I would like to suggest to the RDO and TripleO community > > that we undertake the following > > > > 1) Overhaul all the TripleO documentation so that all the steps > default > > to utilising/deploying using RDO Stable (that is, the releases done > by > > CBS). There should be colored boxes with alt steps for those who > wish to > > use RDO Trunk on the stable branch, and RDO Trunk from master. This > > basically inverts the current pattern. I think anyone, Operator or > > developer, who is working through the documentation for the first > time, > > should be given steps that maximise the chance of success, and thus > the > > most stable release we have. 
Once a user has gone through the process once, they can look at the
> > alternative steps for more aggressive releases.
> >
> > 2) Patch python-tripleoclient so that by default, when you run
> > "openstack overcloud image build", it builds the images utilising the
> > rdo_release DIB element and sets the RDO_RELEASE environment variable
> > to 'mitaka', or whatever the current stable release is (and we should
> > endeavour to update it with new releases). There should be no extra
> > environment variables necessary to build images, and by default it
> > should never touch anything RDO Trunk (delorean) related.
> >
> > 3) For bugs like the two I have mentioned above, we need to have some
> > sort of robust process for either backporting those patches to the
> > builds in CBS (I understand we don't do this for various reasons), or
> > we need some kind of tooling or solution that allows operators to
> > apply only the fixes they need from RDO Trunk (delorean). We need to
> > ensure that when an Operator utilises TripleO they have the greatest
> > chance of success; bugs such as these, which severely impact the
> > deployment process, harm the adoption of TripleO and RDO.
> >
> > 4) We should curate and keep an up to date page on rdoproject.org that
> > highlights the outstanding issues related to TripleO on the RDO Stable
> > (CBS) releases. These should have links to relevant bugzillas and
> > clean instructions on how to work around each issue, or cleanly apply
> > a patch to avoid it, and as new releases make it out, we should update
> > the page to drop off workarounds that are no longer needed.
> >
> > The goal is to push Operators/Users towards working with our most
> > stable code as much as possible, and to track/curate issues around
> > that. This way everyone should be on the same page, issues are easier
> > to discuss and diagnose, and overall people's experiences should be
> > better.
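[Editorial note: the patch step in 4) above pipes a Gerrit-served, base64-encoded diff straight into patch(1). The sketch below wraps that flow with a dry run before the real apply. The helper names are invented for illustration; the URL pattern is taken from the curl command quoted above.]

```shell
#!/bin/sh
# Sketch of step 4's patch flow. gerrit_patch_url/apply_gerrit_patch are
# made-up helper names; the URL layout matches the curl command quoted above.
gerrit_patch_url() {
    # $1 = Gerrit change number, $2 = revision SHA
    echo "https://review.openstack.org/changes/$1/revisions/$2/patch?download"
}

apply_gerrit_patch() {
    tmp=$(mktemp)
    # Gerrit serves the patch base64-encoded, hence the decode step.
    curl -s "$(gerrit_patch_url "$1" "$2")" | base64 -d > "$tmp"
    # Dry-run first so a failed hunk doesn't leave the tree half-patched.
    patch -p1 --dry-run < "$tmp" && patch -p1 < "$tmp"
    status=$?
    rm -f "$tmp"
    return $status
}

# Building the URL alone makes no network call:
gerrit_patch_url 306421 abd50d8438e7d371ce24f97d8f8f67052b562007
```

The dry run matters here because the target is a live site-packages directory: a partially applied patch would leave the inspector in an undefined state.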
> >
> > I'm interested in thoughts, feedback, and concerns, both from the RDO
> > and TripleO community, and from the Operator/User community.
> >
> > Regards,
> >
> > Graeme
> >
> > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447
> > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892
> >
> > On 05/06/16 02:04, Pedro Sousa wrote:
> > > Thanks Marius,
> > >
> > > I can confirm that it installs fine with 3 controllers + 3 computes
> > > after recreating the stack
> > >
> > > Regards
> > >
> > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea <marius at remote-lab.net> wrote:
> > >
> > >     Hi Pedro,
> > >
> > >     Scaling out controller nodes is not supported at this moment:
> > >     https://bugzilla.redhat.com/show_bug.cgi?id=1243312
> > >
> > >     On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> > >     > Hi,
> > >     >
> > >     > some update on scaling the cloud:
> > >     >
> > >     > 1 controller + 1 compute -> 1 controller + 3 computes OK
> > >     >
> > >     > 1 controller + 3 computes -> 3 controllers + 3 computes FAILS
> > >     >
> > >     > Problem: the new controller nodes are "stuck" in "pcsd start",
> > >     > so it seems to be a problem joining the pacemaker cluster...
> > >     > Did anyone have this problem?
> > >     >
> > >     > Regards
> > >     >
> > >     > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa <pgsousa at gmail.com> wrote:
> > >     >>
> > >     >> Hi,
> > >     >>
> > >     >> I finally managed to install a baremetal in mitaka with 1
> > >     >> controller + 1 compute with network isolation. Thank god :)
> > >     >>
> > >     >> All I did was:
> > >     >>
> > >     >> # yum install centos-release-openstack-mitaka
> > >     >> # sudo yum install python-tripleoclient
> > >     >>
> > >     >> without epel repos.
> > >     >>
> > >     >> Then followed instructions from the Red Hat site.
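[Editorial note: step 6 of Graeme's walkthrough swaps only the agent ramdisk into /httpboot; a checksum compare is a cheap way to confirm the copy actually took. An illustrative sketch — same_ramdisk is a made-up name, and the paths in the usage comment are the ones from the email.]

```shell
#!/bin/sh
# Compare the freshly built initramfs against the deployed ramdisk by
# checksum. same_ramdisk is an invented helper, not part of any TripleO tool.
same_ramdisk() {
    # $1 = freshly built initramfs, $2 = deployed ramdisk
    a=$(md5sum "$1" | cut -d' ' -f1)
    b=$(md5sum "$2" | cut -d' ' -f1)
    [ "$a" = "$b" ]
}

# e.g., after the sudo cp in step 6:
# same_ramdisk ~/images-mitaka-trunk/ironic-python-agent.initramfs \
#              /httpboot/agent.ramdisk && echo "trunk ramdisk deployed"
```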
> > >>
> > >> I downloaded the overcloud images from:
> > >>
> > >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
> > >>
> > >> I do have an issue that forces me to delete a json file and run
> > >> os-refresh-config inside my overcloud nodes; other than that it
> > >> installs fine.
> > >>
> > >> Now I'll test with 2 more controllers + 2 computes to have a full
> > >> HA deployment.
> > >>
> > >> If anyone needs help to document this I'll be happy to help.
> > >>
> > >> Regards,
> > >> Pedro Sousa
> > >>
> > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy <rlandy at redhat.com> wrote:
> > >>>
> > >>> The report says: "Fix Released" as of 2016-05-24.
> > >>> Are you installing on a clean system with the latest repositories?
> > >>>
> > >>> Might also want to check your version of rabbitmq: I have
> > >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
> > >>>
> > >>> ----- Original Message -----
> > >>> > From: "Pedro Sousa" <pgsousa at gmail.com>
> > >>> > To: "Ronelle Landy" <rlandy at redhat.com>
> > >>> > Cc: "Christopher Brown" <cbrown2 at ocf.co.uk>, "Ignacio Bravo"
> > >>> > <ibravo at ltgfederal.com>, "rdo-list" <rdo-list at redhat.com>
> > >>> > Sent: Friday, June 3, 2016 1:20:43 PM
> > >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> > >>> >
> > >>> > Any way to work around this? Maybe downgrade hiera?
> > >>> >
> > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy <rlandy at redhat.com> wrote:
> > >>> >
> > >>> > > I am not sure exactly where you installed from, and when you
> > >>> > > did your installation, but any chance you've hit
> > >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892?
> > >>> > > There is a linked bugzilla record.
> > > >>> > > > > > >>> > > ----- Original Message ----- > > > >>> > > > From: "Pedro Sousa" pgsousa at gmail.com> > > > >> > > > >>> > > > To: "Ronelle Landy" rlandy at redhat.com> > > > >> > > > >>> > > > Cc: "Christopher Brown" cbrown2 at ocf.co.uk> > > > >>, > "Ignacio Bravo" < > > > >>> > > ibravo at ltgfederal.com > > >>, > > > "rdo-list" > > > >>> > > > > > >> > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM > > > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > version? > > > >>> > > > > > > >>> > > > Thanks Ronelle, > > > >>> > > > > > > >>> > > > do you think this kind of errors can be related with > network > > > >>> > > > settings? > > > >>> > > > > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', > > > >>> > > > resolution='': > > > >>> > > > undefined method `[]' for nil:NilClass Could not > retrieve > > > >>> > > > fact='rabbitmq_nodename', resolution='': > undefined > > > >>> > > > method `[]' > > > >>> > > > for nil:NilClass" > > > >>> > > > > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy > > > > > >> > > > >>> > > > wrote: > > > >>> > > > > > > >>> > > > > Hi Pedro, > > > >>> > > > > > > > >>> > > > > You could use the docs you referred to. > > > >>> > > > > Alternatively, if you want to use a vm for the > > > undercloud and > > > >>> > > > > baremetal > > > >>> > > > > machines for the overcloud, it is possible to use > Tripleo > > > >>> > > > > Qucikstart > > > >>> > > with a > > > >>> > > > > few modifications. > > > >>> > > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. 
> > > >>> > > > > > > > >>> > > > > ----- Original Message ----- > > > >>> > > > > > From: "Pedro Sousa" pgsousa at gmail.com> > > > >> > > > >>> > > > > > To: "Ronelle Landy" rlandy at redhat.com> > > > >> > > > >>> > > > > > Cc: "Christopher Brown" > > > >>, > "Ignacio Bravo" < > > > >>> > > > > ibravo at ltgfederal.com ibravo at ltgfederal.com> > > >>, > > > "rdo-list" > > > >>> > > > > > > > > >> > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > version? > > > >>> > > > > > > > > >>> > > > > > Hi Ronelle, > > > >>> > > > > > > > > >>> > > > > > maybe I understand it wrong but I thought that > Tripleo > > > >>> > > > > > Quickstart > > > >>> > > was for > > > >>> > > > > > deploying virtual environments? > > > >>> > > > > > > > > >>> > > > > > And for baremetal we should use > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > >>> > > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > >>> > > > > > ? > > > >>> > > > > > > > > >>> > > > > > Thanks > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > > > >>> > > > > > > > >> > > > >>> > > wrote: > > > >>> > > > > > > > > >>> > > > > > > Hello, > > > >>> > > > > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on > baremetal > > > >>> > > > > > > systems - > > > >>> > > using > > > >>> > > > > > > Tripleo Quickstart with both single-nic-vlans > and > > > >>> > > > > > > bond-with-vlans > > > >>> > > > > network > > > >>> > > > > > > isolation configurations. 
> > > >>> > > > > > > > > > >>> > > > > > > Baremetal can have some complicated networking > > > issues but, > > > >>> > > > > > > from > > > >>> > > > > previous > > > >>> > > > > > > experiences, if a single-controller deployment > > > worked but a > > > >>> > > > > > > HA > > > >>> > > > > deployment > > > >>> > > > > > > did not, I would check: > > > >>> > > > > > > - does the HA deployment command include: -e > > > >>> > > > > > > > > > >>> > > > > > > > >>> > > > > > >>> > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > > > >>> > > > > > > - are there possible MTU issues? > > > >>> > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > ----- Original Message ----- > > > >>> > > > > > > > From: "Christopher Brown" > > > >> > > > >>> > > > > > > > To: pgsousa at gmail.com > > > >, > > > ibravo at ltgfederal.com > > > > > > >>> > > > > > > > Cc: rdo-list at redhat.com > > > > > > > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo > stable > > > version? > > > >>> > > > > > > > > > > >>> > > > > > > > Hello Ignacio, > > > >>> > > > > > > > > > > >>> > > > > > > > Thanks for your response and good to know it > > isn't > > > just me! > > > >>> > > > > > > > > > > >>> > > > > > > > I would be more than happy to provide > > developers with > > > >>> > > > > > > > access to > > > >>> > > our > > > >>> > > > > > > > bare metal environments. I'll also file some > > bugzilla > > > >>> > > > > > > > reports to > > > >>> > > see > > > >>> > > > > if > > > >>> > > > > > > > this generates any interest. > > > >>> > > > > > > > > > > >>> > > > > > > > Please do let me know if you make any > > progress - I am > > > >>> > > > > > > > trying to > > > >>> > > > > deploy > > > >>> > > > > > > > HA with network isolation, multiple nics and > > vlans. 
> > > >>> > > > > > > > > > > >>> > > > > > > > The RDO web page states: > > > >>> > > > > > > > > > > >>> > > > > > > > "If you want to create a production-ready > cloud, > > > you'll > > > >>> > > > > > > > want to > > > >>> > > use > > > >>> > > > > the > > > >>> > > > > > > > TripleO quickstart guide." > > > >>> > > > > > > > > > > >>> > > > > > > > which is a contradiction in terms really. > > > >>> > > > > > > > > > > >>> > > > > > > > Cheers > > > >>> > > > > > > > > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio > Bravo > > > wrote: > > > >>> > > > > > > > > Pedro / Christopher, > > > >>> > > > > > > > > > > > >>> > > > > > > > > Just wanted to share with you that I also > had > > > plenty of > > > >>> > > > > > > > > issues > > > >>> > > > > > > > > deploying on bare metal HA servers, and > have > > > paused the > > > >>> > > deployment > > > >>> > > > > > > > > using TripleO until better winds start to > flow > > > here. I > > > >>> > > > > > > > > was > > > >>> > > able to > > > >>> > > > > > > > > deploy the QuickStart, but on bare metal > the > > > history was > > > >>> > > different. > > > >>> > > > > > > > > Couldn't even deploy a two server > > configuration. > > > >>> > > > > > > > > > > > >>> > > > > > > > > I was thinking that it would be good to > > have the > > > >>> > > > > > > > > developers > > > >>> > > have > > > >>> > > > > > > > > access to one of our environments and go > > through > > > a full > > > >>> > > > > > > > > install > > > >>> > > > > with > > > >>> > > > > > > > > us to better see where things fail. We can > > do this > > > >>> > > > > > > > > handholding > > > >>> > > > > > > > > deployment once every week/month based on > > > developers time > > > >>> > > > > > > > > availability. That way we can get a working > > > install, and > > > >>> > > > > > > > > we can > > > >>> > > > > > > > > troubleshoot real life environment > problems. 
> > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > IB > > > >>> > > > > > > > > > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > > > >>> > > > > > > > > > > >> > > > >>> > > wrote: > > > >>> > > > > > > > > > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again > as there's > > > >>> > > > > > > > > > seems to > > > >>> > > be > > > >>> > > > > new > > > >>> > > > > > > > > > updates. > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > Stable Branch Skip all repos mentioned > above, > > > other > > > >>> > > > > > > > > > than > > > >>> > > epel- > > > >>> > > > > > > > > > release which is still required. > > > >>> > > > > > > > > > Enable latest RDO Stable Delorean > repository > > > for all > > > >>> > > > > > > > > > packages > > > >>> > > > > > > > > > sudo curl -o > > > /etc/yum.repos.d/delorean-liberty.repo > > > >>> > > > > https://trunk.r > > > >>> > > > > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > > > > > >>> > > > > > > > > > Enable the Delorean Deps repository > > > >>> > > > > > > > > > sudo curl -o > > > >>> > > > > > > > > > > /etc/yum.repos.d/delorean-deps-liberty.repo > > > >>> > > > > http://tru > > > >>> > > > > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, > Christopher > > > Brown < > > > >>> > > > > cbrown2 at ocf.co > > >. > > > >>> > > > > > > > > > uk> wrote: > > > >>> > > > > > > > > > > No, Liberty deployed ok for us. > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > It suggests to me a package mismatch. > Have you > > > >>> > > > > > > > > > > completely > > > >>> > > > > rebuilt > > > >>> > > > > > > > > > > the > > > >>> > > > > > > > > > > undercloud and then the images using > Liberty? 
> > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, > Pedro > > > Sousa wrote: > > > >>> > > > > > > > > > > > AttributeError: 'module' object has > no > > > attribute > > > >>> > > 'PortOpt' > > > >>> > > > > > > > > > > -- > > > >>> > > > > > > > > > > Regards, > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > Christopher Brown > > > >>> > > > > > > > > > > OpenStack Engineer > > > >>> > > > > > > > > > > OCF plc > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > >>> > > > > > > > > > > Web: www.ocf.co.uk > > > > > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > > > > > >>> > > > > > > > > > > Twitter: @ocfplc > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > Please note, any emails relating to an > OCF > > > Support > > > >>> > > > > > > > > > > request > > > >>> > > must > > > >>> > > > > > > > > > > always > > > >>> > > > > > > > > > > be sent to support at ocf.co.uk support at ocf.co.uk> > > > > for a > > ticket number to > > > >>> > > > > > > > > > > be > > > >>> > > > > generated > > > >>> > > > > > > > > > > or > > > >>> > > > > > > > > > > existing support ticket to be updated. > > > Should this > > > >>> > > > > > > > > > > not be > > > >>> > > done > > > >>> > > > > > > > > > > then OCF > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > cannot be held responsible for > > requests not > > > dealt > > > >>> > > > > > > > > > > with in a > > > >>> > > > > > > > > > > timely > > > >>> > > > > > > > > > > manner. > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in > England > > > and Wales. > > > >>> > > > > Registered > > > >>> > > > > > > > > > > number > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. 
> > > Registered office > > > >>> > > address: > > > >>> > > > > > > > > > > OCF plc, > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe > > Park, > > > >>> > > > > > > > > > > Chapeltown, > > > >>> > > > > > > > > > > Sheffield S35 > > > >>> > > > > > > > > > > 2PG. > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message in > > error, > > > please > > > >>> > > > > > > > > > > notify > > > >>> > > us > > > >>> > > > > > > > > > > immediately and remove it from your > > system. > > > >>> > > > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > _______________________________________________ > > > >>> > > > > > > > > rdo-list mailing list > > > >>> > > > > > > > > rdo-list at redhat.com > > > > > > > >>> > > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > >>> > > > > > > > > > > > >>> > > > > > > > > To unsubscribe: > rdo-list-unsubscribe at redhat.com > > > > > > > > >>> > > > > > > > -- > > > >>> > > > > > > > Regards, > > > >>> > > > > > > > > > > >>> > > > > > > > Christopher Brown > > > >>> > > > > > > > OpenStack Engineer > > > >>> > > > > > > > OCF plc > > > >>> > > > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > >>> > > > > > > > Web: www.ocf.co.uk > > > > > >>> > > > > > > > Blog: blog.ocf.co.uk > > > > > >>> > > > > > > > Twitter: @ocfplc > > > >>> > > > > > > > > > > >>> > > > > > > > Please note, any emails relating to an OCF > Support > > > request > > > >>> > > > > > > > must > > > >>> > > > > always > > > >>> > > > > > > > be sent to support at ocf.co.uk support at ocf.co.uk> > > > > for a > > ticket number to be > > > >>> > > generated or > > > >>> > > > > > > > existing support ticket to be updated. 
> > Should this > > > not be > > > >>> > > > > > > > done > > > >>> > > then > > > >>> > > > > OCF > > > >>> > > > > > > > > > > >>> > > > > > > > cannot be held responsible for requests not > > dealt > > > with in a > > > >>> > > timely > > > >>> > > > > > > > manner. > > > >>> > > > > > > > > > > >>> > > > > > > > OCF plc is a company registered in England > > and Wales. > > > >>> > > > > > > > Registered > > > >>> > > > > number > > > >>> > > > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. > > Registered office > > > >>> > > > > > > > address: > > > >>> > > OCF > > > >>> > > > > plc, > > > >>> > > > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > > > Chapeltown, > > > >>> > > Sheffield > > > >>> > > > > S35 > > > >>> > > > > > > > 2PG. > > > >>> > > > > > > > > > > >>> > > > > > > > If you have received this message in error, > > please > > > notify > > > >>> > > > > > > > us > > > >>> > > > > > > > immediately and remove it from your system. 
> > > >>> > > > > > > > > > > >>> > > > > > > > > _______________________________________________ > > > >>> > > > > > > > rdo-list mailing list > > > >>> > > > > > > > rdo-list at redhat.com > > > > > > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > >>> > > > > > > > > > > >>> > > > > > > > To unsubscribe: > rdo-list-unsubscribe at redhat.com > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >> > > > >> > > > > > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com rdo-list-unsubscribe at redhat.com> > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com rdo-list-unsubscribe at redhat.com> > > > > > > > > > -- > > Graeme Gillies > > Principal Systems Administrator > > Openstack Infrastructure > > Red Hat Australia > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbrown2 at ocf.co.uk Mon Jun 6 10:53:54 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Mon, 6 Jun 2016 11:53:54 +0100 Subject: [rdo-list] Baremetal Tripleo stable version? 
In-Reply-To: <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
	<53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
Message-ID: <1465210434.9673.50.camel@ocf.co.uk>

Hi Graeme,

Thanks for your email, which is greatly appreciated.

I am currently rebuilding using your instructions and will update with
my findings.

Once this is done I'll look at starting a basic baremetal install guide
for the RDO website, as one doesn't exist at the moment that I can see.
I think one of the main "takeaways" from this is that stable
documentation is needed urgently. I'd be very much inclined to keep it
separate from the rather confusing developer documentation in use
currently. This is why people seem to be heading off to the Red Hat
docs, I guess.

But I'd be really grateful if the bugs under discussion are addressed in
Mitaka stable as soon as possible, as curling patches is less great.

As an addition, following discussion with Pedro, it looks like the
overcloud deployment doesn't handle spanning tree on switches correctly,
as we need to manually delete json files and re-run os-apply-config when
the deployment stalls. Spanning tree ships enabled by default on
switches these days, so it would be good if the deployment could cater
for links that aren't immediately in a forwarding state.

Happy to help out with documentation and keeping errata/workarounds up
to date - I think we just need a "stable" section of the website, which
doesn't seem to exist at the moment.

Regards

On Mon, 2016-06-06 at 00:37 +0100, Graeme Gillies wrote:
> Hi Everyone,
> 
> I just wanted to say I have been following this thread quite closely
> and can sympathize with some of the pain people are going through to
> get TripleO to work.
> 
> Currently it's quite difficult and a bit opaque how to actually
> utilise the stable mitaka repos in order to build a functional
> undercloud and overcloud environment.
> 
> First I wanted to share the steps I have undergone in order to get a
> functional overcloud working with RDO Mitaka utilising the RDO stable
> release built by CentOS, and then I'll talk about some specific steps
> I think need to be undertaken by the RDO/TripleO team in order to
> provide a better experience in the future.
> 
> To get a functional overcloud using RDO Mitaka, you need to do the
> following
> 
> 1) Install EPEL on your undercloud
> 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your
> undercloud
> 3) Follow the normal steps to install your undercloud (modifying
> undercloud.conf, and running openstack undercloud install)
> 4) You will now need to manually patch ironic on the undercloud in
> order to make sure repeated introspection works. This might not be
> needed if you don't do any introspection, but I find more often than
> not you end up having to do it, so it's worthwhile. The bug you need
> to patch is [1] and I typically run the following commands to apply
> the patch
> 
> # sudo su -
> $ cd /usr/lib/python2.7/site-packages
> $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1
> $ systemctl restart openstack-ironic-inspector
> $ systemctl restart openstack-ironic-inspector-dnsmasq
> $ exit
> #
> 
> 5) Manually patch the undercloud to build overcloud images using the
> rhos-release rpm only (which utilises the stable Mitaka repo from
> CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying
> the file
> 
> /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
> 
> At around line 467 you will see a reference to epel; I add a new line
> after that to include the rdo_release DIB element in the build as
> well.
> This typically makes the file look something like
> 
> http://paste.openstack.org/show/508196/
> 
> (note line 468). Then I create a directory to store my images, build
> them specifying the mitaka version of rdo_release, and then upload
> these images
> 
> # mkdir ~/images
> # cd ~/images
> # export RDO_RELEASE=mitaka
> # openstack overcloud image build --all
> # openstack overcloud image upload --update-existing
> 
> 6) Because of the bug at [2], which affects the ironic agent ramdisk,
> we need to build a set of images utilising RDO Trunk for the mitaka
> branch (where the fix is applied), and then upload *only* the new
> ironic ramdisk. This is done with
> 
> # mkdir ~/images-mitaka-trunk
> # cd ~/images-mitaka-trunk
> # export USE_DELOREAN_TRUNK=1
> # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
> # export DELOREAN_REPO_FILE="delorean.repo"
> # openstack overcloud image build --type agent-ramdisk
> # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk
> 
> 7) Follow the rest of the documentation to deploy the overcloud
> normally
> 
> Please note that obviously your mileage may vary, and this is by no
> means an exhaustive list of the problems. I have however used these
> steps to do multiple node deployments (10+ nodes) with HA over
> different hardware sets with different networking setups (single nic,
> multiple nics with bonding + vlans).
> 
> With all the different repos floating around, all of which change very
> rapidly, combined with the documentation defaults targeting developers
> and CI systems (not end users), it's hard not only to get a stable
> TripleO install up, but also to communicate and discuss clearly with
> others what is working, what is broken, and how to compare two
> installations to see if they are experiencing the same issues.
> 
> To this end I would like to suggest to the RDO and TripleO community
> that we undertake the following
> 
> 1) Overhaul all the TripleO documentation so that all the steps
> default to utilising/deploying RDO Stable (that is, the releases done
> by CBS). There should be colored boxes with alt steps for those who
> wish to use RDO Trunk on the stable branch, and RDO Trunk from master.
> This basically inverts the current pattern. I think anyone, Operator
> or developer, working through the documentation for the first time,
> should be given steps that maximise the chance of success, and thus
> the most stable release we have. Once a user has gone through the
> process once, they can look at the alternative steps for more
> aggressive releases.
> 
> 2) Patch python-tripleoclient so that by default, when you run
> "openstack overcloud image build", it builds the images utilising the
> rdo_release DIB element and sets the RDO_RELEASE environment variable
> to 'mitaka', or whatever the current stable release is (and we should
> endeavour to update it with new releases). There should be no extra
> environment variables necessary to build images, and by default it
> should never touch anything RDO Trunk (delorean) related.
> 
> 3) For bugs like the two I have mentioned above, we need to have some
> sort of robust process for either backporting those patches to the
> builds in CBS (I understand we don't do this for various reasons), or
> we need some kind of tooling or solution that allows operators to
> apply only the fixes they need from RDO Trunk (delorean). We need to
> ensure that when an Operator utilises TripleO they have the greatest
> chance of success; bugs such as these, which severely impact the
> deployment process, harm the adoption of TripleO and RDO.
> 
> 4) We should curate and keep an up to date page on rdoproject.org that
> highlights the outstanding issues related to TripleO on the RDO
> Stable (CBS) releases.
These should have links to relevant bugzillas and
> clean instructions on how to work around each issue, or cleanly apply
> a patch to avoid it, and as new releases make it out, we should update
> the page to drop off workarounds that are no longer needed.
> 
> The goal is to push Operators/Users towards working with our most
> stable code as much as possible, and to track/curate issues around
> that. This way everyone should be on the same page, issues are easier
> to discuss and diagnose, and overall people's experiences should be
> better.
> 
> I'm interested in thoughts, feedback, and concerns, both from the RDO
> and TripleO community, and from the Operator/User community.
> 
> Regards,
> 
> Graeme
> 
> [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892
> 
> On 05/06/16 02:04, Pedro Sousa wrote:
> > Thanks Marius,
> > 
> > I can confirm that it installs fine with 3 controllers + 3 computes
> > after recreating the stack
> > 
> > Regards
> > 
> > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea <marius at remote-lab.net> wrote:
> > 
> >     Hi Pedro,
> > 
> >     Scaling out controller nodes is not supported at this moment:
> >     https://bugzilla.redhat.com/show_bug.cgi?id=1243312
> > 
> >     On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa <pgsousa at gmail.com> wrote:
> >     > Hi,
> >     > 
> >     > some update on scaling the cloud:
> >     > 
> >     > 1 controller + 1 compute -> 1 controller + 3 computes OK
> >     > 
> >     > 1 controller + 3 computes -> 3 controllers + 3 computes FAILS
> >     > 
> >     > Problem: the new controller nodes are "stuck" in "pcsd start",
> >     > so it seems to be a problem joining the pacemaker cluster...
> >     > Did anyone have this problem?
> >     > 
> >     > Regards
> >     > 
> >     > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa <pgsousa at gmail.com> wrote:
> >     >> 
> >     >> Hi,
> >     >> 
> >     >> I finally managed to install a baremetal in mitaka with 1
> >     >> controller + 1 compute with network isolation.
Thank god :) > > >> > > >> All I did was: > > >> > > >> #yum install centos-release-openstack-mitaka > > >> #sudo yum install python-tripleoclient > > >> > > >> without epel repos. > > >> > > >> Then followed instructions from Redhat Site. > > >> > > >> I downloaded the overcloud images from: > > >> > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_image > > s/mitaka/delorean/ > > >> > > >> I do have an issue that forces me to delete a json file and > > run > > >> os-refresh-config inside my overcloud nodes other than that > > it > > installs > > >> fine. > > >> > > >> Now I'll test with more 2 controllers + 2 computes to have a > > full HA > > >> deployment. > > >> > > >> If anyone needs help to document this I'll be happy to help. > > >> > > >> Regards, > > >> Pedro Sousa > > >> > > >> > > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy > .com > > > wrote: > > >>> > > >>> The report says: "Fix Released" as of 2016-05-24. > > >>> Are you installing on a clean system with the latest > > repositories? > > >>> > > >>> Might also want to check your version of rabbitmq: I have > > >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. > > >>> > > >>> ----- Original Message ----- > > >>> > From: "Pedro Sousa" > ail.com>> > > >>> > To: "Ronelle Landy" > hat.com>> > > >>> > Cc: "Christopher Brown" > >, "Ignacio Bravo" > > >>> > >, > > "rdo-list" > > >>> > > > > >>> > Sent: Friday, June 3, 2016 1:20:43 PM > > >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? > > >>> > > > >>> > Anyway to workaround this? Maybe downgrade hiera? > > >>> > > > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy > > > > > >>> > wrote: > > >>> > > > >>> > > I am not sure exactly where you installed from, and > > when you > > did your > > >>> > > installation, but any chance, you've hit: > > >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? > > >>> > > There is a link bugzilla record. 
> > >>> > > > > >>> > > ----- Original Message ----- > > >>> > > > From: "Pedro Sousa" > > > > >>> > > > To: "Ronelle Landy" > > > > >>> > > > Cc: "Christopher Brown" > >, "Ignacio Bravo" < > > >>> > > ibravo at ltgfederal.com >, > > "rdo-list" > > >>> > > > > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM > > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > > version? > > >>> > > > > > >>> > > > Thanks Ronelle, > > >>> > > > > > >>> > > > do you think this kind of errors can be related with > > network > > >>> > > > settings? > > >>> > > > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', > > >>> > > > resolution='': > > >>> > > > undefined method `[]' for nil:NilClass Could not > > retrieve > > >>> > > > fact='rabbitmq_nodename', resolution='': > > undefined > > >>> > > > method `[]' > > >>> > > > for nil:NilClass" > > >>> > > > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy > > > > > >>> > > > wrote: > > >>> > > > > > >>> > > > > Hi Pedro, > > >>> > > > > > > >>> > > > > You could use the docs you referred to. > > >>> > > > > Alternatively, if you want to use a vm for the > > undercloud and > > >>> > > > > baremetal > > >>> > > > > machines for the overcloud, it is possible to use > > Tripleo > > >>> > > > > Qucikstart > > >>> > > with a > > >>> > > > > few modifications. > > >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/ > > 1571028. > > >>> > > > > > > >>> > > > > ----- Original Message ----- > > >>> > > > > > From: "Pedro Sousa" > > > > >>> > > > > > To: "Ronelle Landy" > > > > >>> > > > > > Cc: "Christopher Brown" > >, "Ignacio Bravo" < > > >>> > > > > ibravo at ltgfederal.com > >>, > > "rdo-list" > > >>> > > > > > > > > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM > > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable > > version? 
> > >>> > > > > > > > >>> > > > > > Hi Ronelle, > > >>> > > > > > > > >>> > > > > > maybe I understand it wrong but I thought that > > Tripleo > > >>> > > > > > Quickstart > > >>> > > was for > > >>> > > > > > deploying virtual environments? > > >>> > > > > > > > >>> > > > > > And for baremetal we should use > > >>> > > > > > > > >>> > > > > > > >>> > > > > >>> > > > > http://docs.openstack.org/developer/tripleo-docs/installation/i > > nstallation.html > > >>> > > > > > ? > > >>> > > > > > > > >>> > > > > > Thanks > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy > > >>> > > > > > > > > >>> > > wrote: > > >>> > > > > > > > >>> > > > > > > Hello, > > >>> > > > > > > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on > > baremetal > > >>> > > > > > > systems - > > >>> > > using > > >>> > > > > > > Tripleo Quickstart with both single-nic-vlans > > and > > >>> > > > > > > bond-with-vlans > > >>> > > > > network > > >>> > > > > > > isolation configurations. > > >>> > > > > > > > > >>> > > > > > > Baremetal can have some complicated networking > > issues but, > > >>> > > > > > > from > > >>> > > > > previous > > >>> > > > > > > experiences, if a single-controller deployment > > worked but a > > >>> > > > > > > HA > > >>> > > > > deployment > > >>> > > > > > > did not, I would check: > > >>> > > > > > > - does the HA deployment command include: -e > > >>> > > > > > > > > >>> > > > > > > >>> > > > > >>> > > > > /usr/share/openstack-tripleo-heat- > > templates/environments/puppet-pacemaker.yaml > > >>> > > > > > > - are there possible MTU issues? 
> > >>> > > > > > > > > >>> > > > > > > > > >>> > > > > > > ----- Original Message ----- > > >>> > > > > > > > From: "Christopher Brown" > > > > >>> > > > > > > > To: pgsousa at gmail.com > om>, > > ibravo at ltgfederal.com > > >>> > > > > > > > Cc: rdo-list at redhat.com > at.com> > > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM > > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo > > stable > > version? > > >>> > > > > > > > > > >>> > > > > > > > Hello Ignacio, > > >>> > > > > > > > > > >>> > > > > > > > Thanks for your response and good to know it > > isn't > > just me! > > >>> > > > > > > > > > >>> > > > > > > > I would be more than happy to provide > > developers with > > >>> > > > > > > > access to > > >>> > > our > > >>> > > > > > > > bare metal environments. I'll also file some > > bugzilla > > >>> > > > > > > > reports to > > >>> > > see > > >>> > > > > if > > >>> > > > > > > > this generates any interest. > > >>> > > > > > > > > > >>> > > > > > > > Please do let me know if you make any > > progress - I am > > >>> > > > > > > > trying to > > >>> > > > > deploy > > >>> > > > > > > > HA with network isolation, multiple nics and > > vlans. > > >>> > > > > > > > > > >>> > > > > > > > The RDO web page states: > > >>> > > > > > > > > > >>> > > > > > > > "If you want to create a production-ready > > cloud, > > you'll > > >>> > > > > > > > want to > > >>> > > use > > >>> > > > > the > > >>> > > > > > > > TripleO quickstart guide." > > >>> > > > > > > > > > >>> > > > > > > > which is a contradiction in terms really. 
> > >>> > > > > > > > > > >>> > > > > > > > Cheers > > >>> > > > > > > > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio > > Bravo > > wrote: > > >>> > > > > > > > > Pedro / Christopher, > > >>> > > > > > > > > > > >>> > > > > > > > > Just wanted to share with you that I also > > had > > plenty of > > >>> > > > > > > > > issues > > >>> > > > > > > > > deploying on bare metal HA servers, and > > have > > paused the > > >>> > > deployment > > >>> > > > > > > > > using TripleO until better winds start to > > flow > > here. I > > >>> > > > > > > > > was > > >>> > > able to > > >>> > > > > > > > > deploy the QuickStart, but on bare metal > > the > > history was > > >>> > > different. > > >>> > > > > > > > > Couldn't even deploy a two server > > configuration. > > >>> > > > > > > > > > > >>> > > > > > > > > I was thinking that it would be good to > > have the > > >>> > > > > > > > > developers > > >>> > > have > > >>> > > > > > > > > access to one of our environments and go > > through > > a full > > >>> > > > > > > > > install > > >>> > > > > with > > >>> > > > > > > > > us to better see where things fail. We can > > do this > > >>> > > > > > > > > handholding > > >>> > > > > > > > > deployment once every week/month based on > > developers time > > >>> > > > > > > > > availability. That way we can get a working > > install, and > > >>> > > > > > > > > we can > > >>> > > > > > > > > troubleshoot real life environment > > problems. > > >>> > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > IB > > >>> > > > > > > > > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa > > >>> > > > > > > > > > m>> > > >>> > > wrote: > > >>> > > > > > > > > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again > > as there's > > >>> > > > > > > > > > seems to > > >>> > > be > > >>> > > > > new > > >>> > > > > > > > > > updates. 
> > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > Stable Branch Skip all repos mentioned > > above, > > other > > >>> > > > > > > > > > than > > >>> > > epel- > > >>> > > > > > > > > > release which is still required. > > >>> > > > > > > > > > Enable latest RDO Stable Delorean > > repository > > for all > > >>> > > > > > > > > > packages > > >>> > > > > > > > > > sudo curl -o > > /etc/yum.repos.d/delorean-liberty.repo > > >>> > > > > https://trunk.r > > >>> > > > > > > > > > > > doproject.org/centos7-liberty/current/delorean.repo > > > > >>> > > > > > > > > > Enable the Delorean Deps repository > > >>> > > > > > > > > > sudo curl -o > > >>> > > > > > > > > > /etc/yum.repos.d/delorean-deps- > > liberty.repo > > >>> > > > > http://tru > > >>> > > > > > > > > > > > nk.rdoproject.org/centos7-liberty/delorean-deps.repo > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, > > Christopher > > Brown < > > >>> > > > > cbrown2 at ocf.co . > > >>> > > > > > > > > > uk> wrote: > > >>> > > > > > > > > > > No, Liberty deployed ok for us. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > It suggests to me a package mismatch. > > Have you > > >>> > > > > > > > > > > completely > > >>> > > > > rebuilt > > >>> > > > > > > > > > > the > > >>> > > > > > > > > > > undercloud and then the images using > > Liberty? 
> > >>> > > > > > > > > > > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, > > Pedro > > Sousa wrote: > > >>> > > > > > > > > > > > AttributeError: 'module' object has > > no > > attribute > > >>> > > 'PortOpt' > > >>> > > > > > > > > > > -- > > >>> > > > > > > > > > > Regards, > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Christopher Brown > > >>> > > > > > > > > > > OpenStack Engineer > > >>> > > > > > > > > > > OCF plc > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 > > > > >>> > > > > > > > > > > Web: www.ocf.co.uk > > > > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > o.uk> > > >>> > > > > > > > > > > Twitter: @ocfplc > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > Please note, any emails relating to an > > OCF > > Support > > >>> > > > > > > > > > > request > > >>> > > must > > >>> > > > > > > > > > > always > > >>> > > > > > > > > > > be sent to support at ocf.co.uk > > for a ticket number to > > >>> > > > > > > > > > > be > > >>> > > > > generated > > >>> > > > > > > > > > > or > > >>> > > > > > > > > > > existing support ticket to be updated. > > Should this > > >>> > > > > > > > > > > not be > > >>> > > done > > >>> > > > > > > > > > > then OCF > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > cannot be held responsible for requests > > not > > dealt > > >>> > > > > > > > > > > with in a > > >>> > > > > > > > > > > timely > > >>> > > > > > > > > > > manner. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in > > England > > and Wales. > > >>> > > > > Registered > > >>> > > > > > > > > > > number > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. 
> > Registered office > > >>> > > address: > > >>> > > > > > > > > > > OCF plc, > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe > > Park, > > >>> > > > > > > > > > > Chapeltown, > > >>> > > > > > > > > > > Sheffield S35 > > >>> > > > > > > > > > > 2PG. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message in > > error, > > please > > >>> > > > > > > > > > > notify > > >>> > > us > > >>> > > > > > > > > > > immediately and remove it from your > > system. > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > > _______________________________________________ > > >>> > > > > > > > > rdo-list mailing list > > >>> > > > > > > > > rdo-list at redhat.com > .com> > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo > > -list > > >>> > > > > > > > > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat > > .com > > > > >>> > > > > > > > -- > > >>> > > > > > > > Regards, > > >>> > > > > > > > > > >>> > > > > > > > Christopher Brown > > >>> > > > > > > > OpenStack Engineer > > >>> > > > > > > > OCF plc > > >>> > > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > > > > >>> > > > > > > > Web: www.ocf.co.uk > > >>> > > > > > > > Blog: blog.ocf.co.uk > > >>> > > > > > > > Twitter: @ocfplc > > >>> > > > > > > > > > >>> > > > > > > > Please note, any emails relating to an OCF > > Support > > request > > >>> > > > > > > > must > > >>> > > > > always > > >>> > > > > > > > be sent to support at ocf.co.uk > > for a ticket number to be > > >>> > > generated or > > >>> > > > > > > > existing support ticket to be updated. Should > > this > > not be > > >>> > > > > > > > done > > >>> > > then > > >>> > > > > OCF > > >>> > > > > > > > > > >>> > > > > > > > cannot be held responsible for requests not > > dealt > > with in a > > >>> > > timely > > >>> > > > > > > > manner. 
> > >>> > > > > > > > > > >>> > > > > > > > OCF plc is a company registered in England > > and Wales. > > >>> > > > > > > > Registered > > >>> > > > > number > > >>> > > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. > > Registered office > > >>> > > > > > > > address: > > >>> > > OCF > > >>> > > > > plc, > > >>> > > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, > > Chapeltown, > > >>> > > Sheffield > > >>> > > > > S35 > > >>> > > > > > > > 2PG. > > >>> > > > > > > > > > >>> > > > > > > > If you have received this message in error, > > please > > notify > > >>> > > > > > > > us > > >>> > > > > > > > immediately and remove it from your system. > > >>> > > > > > > > > > >>> > > > > > > > > > _______________________________________________ > > >>> > > > > > > > rdo-list mailing list > > >>> > > > > > > > rdo-list at redhat.com > om> > > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-l > > ist > > >>> > > > > > > > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.c > > om > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >>> > > > > >>> > > > >> > > >> > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > -- > Graeme Gillies > Principal Systems Administrator > Openstack Infrastructure > Red Hat Australia > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- 
Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc From marius at remote-lab.net Mon Jun 6 11:13:40 2016 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 6 Jun 2016 13:13:40 +0200 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <1465210434.9673.50.camel@ocf.co.uk> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> <1465210434.9673.50.camel@ocf.co.uk> Message-ID: On Mon, Jun 6, 2016 at 12:53 PM, Christopher Brown wrote: > Hi Graeme, > > Thanks for your email which is greatly appreciated. > > I am currently rebuilding using your instructions and will update with > my findings. Once this is done I'll look at starting a basic baremetal > install guide for the RDO website as one doesn't exist at the moment > that I can see and I think one of the main "takeaways" from this is > that stable documentation is needed urgently. I'd be very much inclined > to keep it separate from the rather confusing developer documentation > in use currently. 
This is why people seem to be heading off to Red Hat > docs I guess. > > But I'd be really grateful if the bugs under discussion are addressed > in Mitaka stable as soon as possible, as curling patches is less great. > > As an addition, it looks like, following discussion with Pedro, the > overcloud deployment doesn't handle spanning tree on switches correctly, > as we need to manually delete json files and re-run os-apply-config > when the deployment stalls. This ships by default on switches > these days, so it would be good if the deployment could cater for links > that aren't immediately in forwarding state. Could you describe this issue in a bit more detail? It would be great if you have some steps so I can reproduce it. Thanks! > Happy to help out with documentation and keeping errata/workarounds up > to date - I think we just need a "stable" section of the website, which > doesn't seem to exist at the moment. > > Regards > > > On Mon, 2016-06-06 at 00:37 +0100, Graeme Gillies wrote: >> Hi Everyone, >> >> I just wanted to say I have been following this thread quite closely >> and >> can sympathize with some of the pain people are going through to get >> TripleO to work. >> >> Currently it's quite difficult and a bit opaque how to actually >> utilise the stable Mitaka repos in order to build a functional >> undercloud and overcloud environment. >> >> First I wanted to share the steps I have undergone in order to get a >> functional overcloud working with RDO Mitaka utilising the RDO stable >> release built by CentOS, and then I'll talk about some specific steps >> I >> think need to be undertaken by the RDO/TripleO team in order to >> provide >> a better experience in the future. 
>> >> To get a functional overcloud using RDO Mitaka, you need to do the >> following >> >> 1) Install EPEL on your undercloud >> 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your >> undercloud >> 3) Follow the normal steps to install your undercloud (modifying >> undercloud.conf, and running openstack undercloud install) >> 4) You will now need to manually patch ironic on the undercloud in >> order >> to make sure repeated introspection works. This might not be needed >> if >> you don't do any introspection, but I find more often than not you >> end >> up having to do it, so it's worthwhile. The bug you need to patch is >> [1] >> and I typically run the following commands to apply the patch >> >> # sudo su - >> $ cd /usr/lib/python2.7/site-packages >> $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1 >> $ systemctl restart openstack-ironic-inspector >> $ systemctl restart openstack-ironic-inspector-dnsmasq >> $ exit >> # >> >> 5) Manually patch the undercloud to build overcloud images using >> rhos-release rpm only (which utilises the stable Mitaka repo from >> CentOS, and nothing from RDO Trunk [delorean]). I do this by >> modifying >> the file >> >> /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py >> >> At around line 467 you will see a reference to epel; I add a new line >> after that to include the rdo_release DIB element in the build as >> well. >> This typically makes the file look something like >> >> http://paste.openstack.org/show/508196/ >> >> (note line 468). Then I create a directory to store my images and >> build >> them specifying the mitaka version of rdo_release. 
I then upload these >> images >> >> # mkdir ~/images >> # cd ~/images >> # export RDO_RELEASE=mitaka >> # openstack overcloud image build --all >> # openstack overcloud image upload --update-existing >> >> 6) Because of the bug at [2], which affects the ironic agent ramdisk, >> we >> need to build a set of images utilising RDO Trunk for the mitaka >> branch >> (where the fix is applied), and then upload *only* the new ironic >> ramdisk. This is done with >> >> # mkdir ~/images-mitaka-trunk >> # cd ~/images-mitaka-trunk >> # export USE_DELOREAN_TRUNK=1 >> # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/" >> # export DELOREAN_REPO_FILE="delorean.repo" >> # openstack overcloud image build --type agent-ramdisk >> # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk >> >> 7) Follow the rest of the documentation to deploy the overcloud >> normally >> >> Please note that obviously your mileage may vary, and this is by no >> means an exclusive list of the problems. I have however used these >> steps to do multiple node deployments (10+ nodes) with HA over >> different >> hardware sets with different networking setups (single nic, multiple >> nic >> with bonding + vlans). >> >> With all the different repos floating around, all of which change very >> rapidly, combined with the documentation defaults targeting >> developers >> and CI systems (not end users), it's hard not only to get a stable >> TripleO install up, but also to communicate and discuss clearly with >> others >> what is working, what is broken, and how to compare two installations >> to >> see if they are experiencing the same issues. >> >> To this end I would like to suggest to the RDO and TripleO community >> that we undertake the following >> >> 1) Overhaul all the TripleO documentation so that all the steps >> default >> to utilising/deploying using RDO Stable (that is, the releases done >> by >> CBS). 
There should be colored boxes with alt steps for those who wish >> to >> use RDO Trunk on the stable branch, and RDO Trunk from master. This >> basically inverts the current pattern. I think anyone, Operator or >> developer, who is working through the documentation for the first >> time, >> should be given steps that maximise the chance of success, and thus >> the >> most stable release we have. Once a user has gone through the process >> once, they can look at the alternative steps for more aggressive >> releases. >> >> 2) Patch python-tripleoclient so that by default, when you run >> "openstack overcloud image build", it builds the images utilising the >> rdo_release DIB element, and sets the RDO_RELEASE environment >> variable >> to be 'mitaka' or whatever the current stable release is (and we >> should >> endeavour to update it with new releases). There should be no extra >> environment variables necessary to build images, and by default it >> should never touch anything RDO Trunk (delorean) related. >> >> 3) For bugs like the two I have mentioned above, we need to have some >> sort of robust process for either backporting those patches to the >> builds in CBS (I understand we don't do this for various reasons), or >> we >> need some kind of tooling or solution that allows operators to apply >> only the fixes they need from RDO Trunk (delorean). We need to ensure >> that when an Operator utilises TripleO they have the greatest chance >> of >> success, and bugs such as these which severely impact the deployment >> process harm the adoption of TripleO and RDO. >> >> 4) We should curate and keep an up-to-date page on rdoproject.org >> that >> does highlight the outstanding issues related to TripleO on the RDO >> Stable (CBS) releases. 
These should have links to relevant bugzillas, >> clean instructions on how to work around the issue, or cleanly apply a >> patch to avoid the issue, and as new releases make it out, we should >> update the page to drop off workarounds that are no longer needed. >> >> The goal is to push Operators/Users to working with our most stable code >> as much as possible, and track/curate issues around that. This way >> everyone should be on the same page, issues are easier to discuss and >> diagnose, and overall people's experiences should be better. >> >> I'm interested in thoughts, feedback, and concerns, both from the RDO >> and TripleO community, and from the Operator/User community. >> >> Regards, >> >> Graeme >> >> [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 
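[Editor's note: Graeme's suggestion 2 above — make "openstack overcloud image build" default to the stable rdo_release element unless the operator explicitly opts into RDO Trunk — could look roughly like this sketch. It is a hypothetical illustration only: the function `image_build_elements`, its arguments, and the defaulting logic are stand-ins, not the actual python-tripleoclient code in overcloud_image.py.]

```python
# Hypothetical sketch of suggestion 2: default overcloud image builds to the
# stable rdo_release DIB element, and only use RDO Trunk (delorean) when the
# operator explicitly asks for it. Names here are illustrative, not the real
# python-tripleoclient internals.
import os

STABLE_RDO_RELEASE = "mitaka"  # bump with each new stable release


def image_build_elements(base_elements, environ=None):
    """Return the DIB elements for an overcloud image build.

    Appends the rdo_release element and pins RDO_RELEASE to the current
    stable release, unless USE_DELOREAN_TRUNK=1 opts into RDO Trunk.
    """
    environ = os.environ if environ is None else environ
    elements = list(base_elements)
    if environ.get("USE_DELOREAN_TRUNK") == "1":
        # Operator explicitly chose RDO Trunk; leave the element list alone
        # and let DELOREAN_TRUNK_REPO / DELOREAN_REPO_FILE drive the build.
        return elements
    elements.append("rdo_release")
    environ.setdefault("RDO_RELEASE", STABLE_RDO_RELEASE)
    return elements
```

With a default like this, a bare "openstack overcloud image build --all" would target the CBS stable packages, matching step 5 without the manual edit to overcloud_image.py, while the step 6 trunk ramdisk build would still work by exporting USE_DELOREAN_TRUNK=1.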
>> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, >> > Pedro >> > Sousa wrote: >> > >>> > > > > > > > > > > > AttributeError: 'module' object has >> > no >> > attribute >> > >>> > > 'PortOpt' >> > >>> > > > > > > > > > > -- >> > >>> > > > > > > > > > > Regards, >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > Christopher Brown >> > >>> > > > > > > > > > > OpenStack Engineer >> > >>> > > > > > > > > > > OCF plc >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 >> > >> > >>> > > > > > > > > > > Web: www.ocf.co.uk >> > >> > >>> > > > > > > > > > > Blog: blog.ocf.co.uk > > o.uk> >> > >>> > > > > > > > > > > Twitter: @ocfplc >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > Please note, any emails relating to an >> > OCF >> > Support >> > >>> > > > > > > > > > > request >> > >>> > > must >> > >>> > > > > > > > > > > always >> > >>> > > > > > > > > > > be sent to support at ocf.co.uk >> > for a ticket number to >> > >>> > > > > > > > > > > be >> > >>> > > > > generated >> > >>> > > > > > > > > > > or >> > >>> > > > > > > > > > > existing support ticket to be updated. >> > Should this >> > >>> > > > > > > > > > > not be >> > >>> > > done >> > >>> > > > > > > > > > > then OCF >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > cannot be held responsible for requests >> > not >> > dealt >> > >>> > > > > > > > > > > with in a >> > >>> > > > > > > > > > > timely >> > >>> > > > > > > > > > > manner. >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > OCF plc is a company registered in >> > England >> > and Wales. >> > >>> > > > > Registered >> > >>> > > > > > > > > > > number >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. 
>> > Registered office >> > >>> > > address: >> > >>> > > > > > > > > > > OCF plc, >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe >> > Park, >> > >>> > > > > > > > > > > Chapeltown, >> > >>> > > > > > > > > > > Sheffield S35 >> > >>> > > > > > > > > > > 2PG. >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > > > If you have received this message in >> > error, >> > please >> > >>> > > > > > > > > > > notify >> > >>> > > us >> > >>> > > > > > > > > > > immediately and remove it from your >> > system. >> > >>> > > > > > > > > > > >> > >>> > > > > > > > > >> > >>> > > > > > > > > >> > _______________________________________________ >> > >>> > > > > > > > > rdo-list mailing list >> > >>> > > > > > > > > rdo-list at redhat.com > > .com> >> > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo >> > -list >> > >>> > > > > > > > > >> > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat >> > .com >> > >> > >>> > > > > > > > -- >> > >>> > > > > > > > Regards, >> > >>> > > > > > > > >> > >>> > > > > > > > Christopher Brown >> > >>> > > > > > > > OpenStack Engineer >> > >>> > > > > > > > OCF plc >> > >>> > > > > > > > >> > >>> > > > > > > > Tel: +44 (0)114 257 2200 >> > >> > >>> > > > > > > > Web: www.ocf.co.uk >> > >>> > > > > > > > Blog: blog.ocf.co.uk >> > >>> > > > > > > > Twitter: @ocfplc >> > >>> > > > > > > > >> > >>> > > > > > > > Please note, any emails relating to an OCF >> > Support >> > request >> > >>> > > > > > > > must >> > >>> > > > > always >> > >>> > > > > > > > be sent to support at ocf.co.uk >> > for a ticket number to be >> > >>> > > generated or >> > >>> > > > > > > > existing support ticket to be updated. Should >> > this >> > not be >> > >>> > > > > > > > done >> > >>> > > then >> > >>> > > > > OCF >> > >>> > > > > > > > >> > >>> > > > > > > > cannot be held responsible for requests not >> > dealt >> > with in a >> > >>> > > timely >> > >>> > > > > > > > manner. 
>> > >>> > > > > > > > >> > >>> > > > > > > > OCF plc is a company registered in England >> > and Wales. >> > >>> > > > > > > > Registered >> > >>> > > > > number >> > >>> > > > > > > > >> > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. >> > Registered office >> > >>> > > > > > > > address: >> > >>> > > OCF >> > >>> > > > > plc, >> > >>> > > > > > > > >> > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, >> > Chapeltown, >> > >>> > > Sheffield >> > >>> > > > > S35 >> > >>> > > > > > > > 2PG. >> > >>> > > > > > > > >> > >>> > > > > > > > If you have received this message in error, >> > please >> > notify >> > >>> > > > > > > > us >> > >>> > > > > > > > immediately and remove it from your system. >> > >>> > > > > > > > >> > >>> > > > > > > > >> > _______________________________________________ >> > >>> > > > > > > > rdo-list mailing list >> > >>> > > > > > > > rdo-list at redhat.com > > om> >> > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-l >> > ist >> > >>> > > > > > > > >> > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.c >> > om >> > >> > >>> > > > > > > > >> > >>> > > > > > > >> > >>> > > > > > >> > >>> > > > > >> > >>> > > > >> > >>> > > >> > >>> > >> > >> >> > >> >> > > >> > > >> > > _______________________________________________ >> > > rdo-list mailing list >> > > rdo-list at redhat.com >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> > >> > >> > >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > >> >> -- >> Graeme Gillies >> Principal Systems Administrator >> Openstack Infrastructure >> Red Hat Australia >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> 
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com

> --
> Regards,
>
> Christopher Brown
> OpenStack Engineer
> OCF plc

From cbrown2 at ocf.co.uk Mon Jun 6 11:53:34 2016
From: cbrown2 at ocf.co.uk (Christopher Brown)
Date: Mon, 6 Jun 2016 12:53:34 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References: <1464964179.9673.30.camel@ocf.co.uk>
 <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
 <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
 <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
 <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
 <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
 <1465210434.9673.50.camel@ocf.co.uk>
Message-ID: <1465214014.9673.59.camel@ocf.co.uk>

On Mon, 2016-06-06 at 12:13 +0100, Marius Cornea wrote:
> On Mon, Jun 6, 2016 at 12:53 PM, Christopher Brown wrote:
> > Could you describe a bit more this issue? It would be great if you
> > have some steps so I can reproduce it.
Thanks!

Sure. Pedro - this is from the email you sent to me - I hope this is OK, but it's basically exactly the same issue I was seeing before disabling STP, and I don't have my logs from the failed runs any more.

The problem is that when you launch the overcloud nodes, the servers sit there doing nothing and you see this in journalctl:

: No local metadata found (['/var/lib/os-collect-config/local-data'])
Jun 04 00:58:42 overcloud-novacompute-2 os-collect-config[6749]: HTTPConnectionPool(host='192.0.2.2', port=8080): Max retries exceeded
Jun 04 00:58:42 overcloud-novacompute-2 os-collect-config[6749]: Source [request] Unavailable.
Jun 04 00:58:42 overcloud-novacompute-2 os-collect-config[6749]: /var/lib/os-collect-config/local-data not found. Skipping
Jun 04 00:58:42 overcloud-novacompute-2 os-collect-config[6749]: No local metadata found (['/var/lib/os-collect-config/local-data'])
Jun 04 00:58:50 overcloud-novacompute-2 os-collect-config[6749]: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exce
Jun 04 00:58:50 overcloud-novacompute-2 os-collect-config[6749]: Source [ec2] Unavailable.
Jun 04 00:58:50 overcloud-novacompute-2 os-collect-config[6749]: HTTPConnectionPool(host='192.0.2.2', port=8080): Max retries exceeded

So you have to ask the system to retry applying the configuration by running:

# os-refresh-config

Running this command you see something like:

[2016-06-04 10:57:09,195] (heat-config) [WARNING] Skipping config b28e3adb-ed18-4e74-94cf-260c3c1eefec, already deployed
[2016-06-04 10:57:09,195] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/b28e3adb-ed18-4e74-94cf-260c3c1eefec.json

Once we do this, the deployment continues.

Hope this helps - thanks to Pedro for pointing me in the right direction.

> > Happy to help out with documentation and keeping errata/workarounds up
> > to date - I think we just need a "stable" section of the website which
> > doesn't seem to exist at the moment.
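A hedged sketch of that force-redeploy workaround as a tiny shell helper. The function names and the DEPLOYED_DIR override are illustrative only, not part of os-collect-config or heat-config; on a real overcloud node the marker directory is /var/lib/heat-config/deployed, and you still run os-refresh-config afterwards:

```shell
# Illustrative helper for the workaround above: os-refresh-config skips any
# config whose <uuid>.json marker already exists under the deployed dir, so
# removing the marker makes the next run re-apply that config.
# DEPLOYED_DIR is overridable here purely for illustration/testing.
DEPLOYED_DIR="${DEPLOYED_DIR:-/var/lib/heat-config/deployed}"

list_deployed_markers() {
    # One <uuid>.json file per config that heat-config considers applied.
    ls "$DEPLOYED_DIR"/*.json 2>/dev/null
}

force_redeploy() {
    # Delete the marker for one config; the actual os-refresh-config re-run
    # is left commented out so this sketch is safe to source anywhere.
    marker="$1"
    rm -f "$marker"
    # os-refresh-config
}
```

On a stuck node this would amount to `force_redeploy /var/lib/heat-config/deployed/<uuid>.json` followed by `os-refresh-config`, matching the WARNING message quoted above.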
> > Regards
>
> > On Mon, 2016-06-06 at 00:37 +0100, Graeme Gillies wrote:
> > >
> > > Hi Everyone,
> > >
> > > I just wanted to say I have been following this thread quite closely and
> > > can sympathize with some of the pain people are going through to get
> > > tripleO to work.
> > >
> > > Currently it's quite difficult and a bit opaque how to actually
> > > utilise the stable mitaka repos in order to build a functional
> > > undercloud and overcloud environment.
> > >
> > > First I wanted to share the steps I have undergone in order to get a
> > > functional overcloud working with RDO Mitaka utilising the RDO stable
> > > release built by CentOS, and then I'll talk about some specific steps I
> > > think need to be undertaken by the RDO/TripleO team in order to provide
> > > a better experience in the future.
> > >
> > > To get a functional overcloud using RDO Mitaka, you need to do the
> > > following:
> > >
> > > 1) Install EPEL on your undercloud
> > > 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your
> > > undercloud
> > > 3) Follow the normal steps to install your undercloud (modifying
> > > undercloud.conf, and running openstack undercloud install)
> > > 4) You will now need to manually patch ironic on the undercloud in order
> > > to make sure repeated introspection works. This might not be needed if
> > > you don't do any introspection, but I find more often than not you end
> > > up having to do it, so it's worthwhile.
The bug you need to patch is [1] and I typically run the following commands to apply the patch:
> > >
> > > # sudo su -
> > > $ cd /usr/lib/python2.7/site-packages
> > > $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1
> > > $ systemctl restart openstack-ironic-inspector
> > > $ systemctl restart openstack-ironic-inspector-dnsmasq
> > > $ exit
> > > #
> > >
> > > 5) Manually patch the undercloud to build overcloud images using the
> > > rdo-release rpm only (which utilises the stable Mitaka repo from
> > > CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying
> > > the file
> > >
> > > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
> > >
> > > At around line 467 you will see a reference to epel; I add a new line
> > > after that to include the rdo_release DIB element in the build as well.
> > > This typically makes the file look something like
> > >
> > > http://paste.openstack.org/show/508196/
> > >
> > > (note line 468). Then I create a directory to store my images, build
> > > them specifying the mitaka version of rdo_release, and upload the
> > > images:
> > >
> > > # mkdir ~/images
> > > # cd ~/images
> > > # export RDO_RELEASE=mitaka
> > > # openstack overcloud image build --all
> > > # openstack overcloud image upload --update-existing
> > >
> > > 6) Because of the bug at [2], which affects the ironic agent ramdisk, we
> > > need to build a set of images utilising RDO Trunk for the mitaka branch
> > > (where the fix is applied), and then upload *only* the new ironic
> > > ramdisk.
This is done with:
> > >
> > > # mkdir ~/images-mitaka-trunk
> > > # cd ~/images-mitaka-trunk
> > > # export USE_DELOREAN_TRUNK=1
> > > # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
> > > # export DELOREAN_REPO_FILE="delorean.repo"
> > > # openstack overcloud image build --type agent-ramdisk
> > > # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk
> > >
> > > 7) Follow the rest of the documentation to deploy the overcloud
> > > normally.
> > >
> > > Please note that obviously your mileage may vary, and this is by no
> > > means an exhaustive list of the problems. I have, however, used these
> > > steps to do multiple-node deployments (10+ nodes) with HA over different
> > > hardware sets with different networking setups (single nic; multiple
> > > nics with bonding + vlans).
> > >
> > > With all the different repos floating around, all of which change very
> > > rapidly, combined with the documentation defaults targeting developers
> > > and CI systems (not end users), it's hard not only to get a stable
> > > TripleO install up, but also to communicate and discuss clearly with
> > > others what is working, what is broken, and how to compare two
> > > installations to see if they are experiencing the same issues.
> > >
> > > To this end I would like to suggest to the RDO and TripleO community
> > > that we undertake the following:
> > >
> > > 1) Overhaul all the TripleO documentation so that all the steps default
> > > to utilising/deploying RDO Stable (that is, the releases done by CBS).
> > > There should be colored boxes with alternative steps for those who wish
> > > to use RDO Trunk on the stable branch, and RDO Trunk from master. This
> > > basically inverts the current pattern.
I think anyone, Operator or developer, who is working through the documentation for the first time should be given steps that maximise the chance of success, and thus the most stable release we have. Once a user has gone through the process once, they can look at the alternative steps for more aggressive releases.
> > >
> > > 2) Patch python-tripleoclient so that by default, when you run
> > > "openstack overcloud image build", it builds the images utilising the
> > > rdo_release DIB element, and sets the RDO_RELEASE environment variable
> > > to 'mitaka' or whatever the current stable release is (and we should
> > > endeavour to update it with new releases). There should be no extra
> > > environment variables necessary to build images, and by default it
> > > should never touch anything RDO Trunk (delorean) related.
> > >
> > > 3) For bugs like the two I have mentioned above, we need to have some
> > > sort of robust process for either backporting those patches to the
> > > builds in CBS (I understand we don't do this for various reasons), or we
> > > need some kind of tooling or solution that allows operators to apply
> > > only the fixes they need from RDO Trunk (delorean). We need to ensure
> > > that when an Operator utilises TripleO they have the greatest chance of
> > > success; bugs such as these, which severely impact the deployment
> > > process, harm the adoption of TripleO and RDO.
> > >
> > > 4) We should curate and keep an up-to-date page on rdoproject.org that
> > > highlights the outstanding issues related to TripleO on the RDO Stable
> > > (CBS) releases.
These should have links to relevant bugzillas, clean instructions on how to work around the issue or cleanly apply a patch to avoid it, and, as new releases make it out, we should update the page to drop workarounds that are no longer needed.
> > >
> > > The goal is to push Operators/Users to working with our most stable code
> > > as much as possible, and track/curate issues around that. This way
> > > everyone should be on the same page, issues are easier to discuss and
> > > diagnose, and overall people's experiences should be better.
> > >
> > > I'm interested in thoughts, feedback, and concerns, both from the RDO
> > > and TripleO community, and from the Operator/User community.
> > >
> > > Regards,
> > >
> > > Graeme
> > >
> > > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447
> > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892
> > >
> > > On 05/06/16 02:04, Pedro Sousa wrote:
> > > >
> > > > Thanks Marius,
> > > >
> > > > I can confirm that it installs fine with 3 controllers + 3 computes
> > > > after recreating the stack
> > > >
> > > > Regards
> > > >
> > > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea wrote:
> > > >
> > > > Hi Pedro,
> > > >
> > > > Scaling out controller nodes is not supported at this moment:
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1243312
> > > >
> > > > On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa wrote:
> > > > > Hi,
> > > > >
> > > > > some update on scaling the cloud:
> > > > >
> > > > > 1 controller + 1 compute -> 1 controller + 3 computes OK
> > > > >
> > > > > 1 controller + 3 computes -> 3 controllers + 3 computes FAILS
> > > > >
> > > > > Problem: The new controller nodes are "stuck" in "pcsd start", so it
> > > > > seems to be a problem joining the pacemaker cluster...
Did anyone have this problem?
> > > > >
> > > > > Regards
> > > >
> > > > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa wrote:
> > > >>
> > > >> Hi,
> > > >>
> > > >> I finally managed to install a baremetal deployment in mitaka with
> > > >> 1 controller + 1 compute with network isolation. Thank god :)
> > > >>
> > > >> All I did was:
> > > >>
> > > >> #yum install centos-release-openstack-mitaka
> > > >> #sudo yum install python-tripleoclient
> > > >>
> > > >> without epel repos.
> > > >>
> > > >> Then followed instructions from the Red Hat site.
> > > >>
> > > >> I downloaded the overcloud images from:
> > > >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
> > > >>
> > > >> I do have an issue that forces me to delete a json file and run
> > > >> os-refresh-config inside my overcloud nodes; other than that it
> > > >> installs fine.
> > > >>
> > > >> Now I'll test with 2 more controllers + 2 computes to have a full HA
> > > >> deployment.
> > > >>
> > > >> If anyone needs help to document this I'll be happy to help.
> > > >>
> > > >> Regards,
> > > >> Pedro Sousa
> > > >>
> > > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:
> > > >>>
> > > >>> The report says: "Fix Released" as of 2016-05-24.
> > > >>> Are you installing on a clean system with the latest repositories?
> > > >>>
> > > >>> Might also want to check your version of rabbitmq: I have
> > > >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
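For Graeme's image-build walkthrough quoted earlier in the thread, the repo choice in steps 5-6 boils down to composing a delorean URL per release. A minimal illustrative sketch of that pattern (these helper functions are not an actual RDO tool, just a restatement of the exports in step 6):

```shell
# Illustrative only: derive the RDO Trunk (delorean) repo URL from a release
# name, mirroring the DELOREAN_TRUNK_REPO value used in step 6 above.
delorean_trunk_url() {
    # e.g. mitaka -> http://trunk.rdoproject.org/centos7-mitaka/current/
    printf 'http://trunk.rdoproject.org/centos7-%s/current/\n' "$1"
}

export_image_build_env() {
    # Mirrors the environment exports from step 6 for a given release.
    USE_DELOREAN_TRUNK=1
    DELOREAN_TRUNK_REPO="$(delorean_trunk_url "$1")"
    DELOREAN_REPO_FILE="delorean.repo"
    export USE_DELOREAN_TRUNK DELOREAN_TRUNK_REPO DELOREAN_REPO_FILE
}
```

With this in place, `export_image_build_env mitaka` followed by `openstack overcloud image build --type agent-ramdisk` would match the sequence quoted above.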
> > > > ----- Original Message -----
> > > > > From: "Pedro Sousa"
> > > > > To: "Ronelle Landy"
> > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
> > > > > Sent: Friday, June 3, 2016 1:20:43 PM
> > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> > > > >
> > > > > Anyway to work around this? Maybe downgrade hiera?
> > > > >
> > > > > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote:
> > > > >
> > > > > > I am not sure exactly where you installed from, and when you did your
> > > > > > installation, but any chance you've hit:
> > > > > > https://bugs.launchpad.net/tripleo/+bug/1584892?
> > > > > > There is a linked bugzilla record.
> > > > > >
> > > > > > ----- Original Message -----
> > > > > > > From: "Pedro Sousa"
> > > > > > > To: "Ronelle Landy"
> > > > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
> > > > > > > Sent: Friday, June 3, 2016 12:26:58 PM
> > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> > > > > > >
> > > > > > > Thanks Ronelle,
> > > > > > >
> > > > > > > do you think these kinds of errors can be related to network
> > > > > > > settings?
> > > > > > >
> > > > > > > "Could not retrieve fact='rabbitmq_nodename', resolution='':
> > > > > > > undefined method `[]' for nil:NilClass Could not retrieve
> > > > > > > fact='rabbitmq_nodename', resolution='': undefined method `[]'
> > > > > > > for nil:NilClass"
> > > > > > >
> > > > > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote:
> > > > > > >
> > > > > > > > Hi Pedro,
> > > > > > > >
> > > > > > > > You could use the docs you referred to.
> > > > > > > > Alternatively, if you want to use a vm for the undercloud and
> > > > > > > > baremetal machines for the overcloud, it is possible to use
> > > > > > > > Tripleo Quickstart with a few modifications.
> > > > > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028.
> > > > > > > >
> > > > > > > > ----- Original Message -----
> > > > > > > > > From: "Pedro Sousa"
> > > > > > > > > To: "Ronelle Landy"
> > > > > > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
> > > > > > > > > Sent: Friday, June 3, 2016 11:48:38 AM
> > > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
> > > > > > > > >
> > > > > > > > > Hi Ronelle,
> > > > > > > > >
> > > > > > > > > maybe I understand it wrong but I thought that Tripleo
> > > > > > > > > Quickstart was for deploying virtual environments?
> > > > >>> > > > > > > > > > > > > > > >>> > > > > > > > > > > OCF plc is a company registered in > > > > England > > > > and Wales. > > > > >>> > > > > Registered > > > > >>> > > > > > > > > > > number > > > > >>> > > > > > > > > > > > > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. > > > > Registered office > > > > >>> > > address: > > > > >>> > > > > > > > > > > OCF plc, > > > > >>> > > > > > > > > > > > > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, > > > > Thorncliffe > > > > Park, > > > > >>> > > > > > > > > > > Chapeltown, > > > > >>> > > > > > > > > > > Sheffield S35 > > > > >>> > > > > > > > > > > 2PG. > > > > >>> > > > > > > > > > > > > > > >>> > > > > > > > > > > If you have received this message > > > > in > > > > error, > > > > please > > > > >>> > > > > > > > > > > notify > > > > >>> > > us > > > > >>> > > > > > > > > > > immediately and remove it from your > > > > system. > > > > >>> > > > > > > > > > > > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > > > > > _______________________________________________ > > > > >>> > > > > > > > > rdo-list mailing list > > > > >>> > > > > > > > > rdo-list at redhat.com > > > dhat > > > > .com> > > > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo > > > > /rdo > > > > -list > > > > >>> > > > > > > > > > > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at re > > > > dhat > > > > .com > > > > > > > > >>> > > > > > > > -- > > > > >>> > > > > > > > Regards, > > > > >>> > > > > > > > > > > > >>> > > > > > > > Christopher Brown > > > > >>> > > > > > > > OpenStack Engineer > > > > >>> > > > > > > > OCF plc > > > > >>> > > > > > > > > > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 > > > > > > > > >>> > > > > > > > Web: www.ocf.co.uk > > > > >>> > > > > > > > Blog: blog.ocf.co.uk > > > uk> > > > > >>> > > > > > > > Twitter: @ocfplc > > > > >>> > > > > > > > > > > > >>> > > > > > > > Please note, any emails relating to an > > > > OCF > > > > Support > > > > 
request > > > > >>> > > > > > > > must > > > > >>> > > > > always > > > > >>> > > > > > > > be sent to support at ocf.co.uk > > > > for a ticket number to be > > > > >>> > > generated or > > > > >>> > > > > > > > existing support ticket to be updated. > > > > Should > > > > this > > > > not be > > > > >>> > > > > > > > done > > > > >>> > > then > > > > >>> > > > > OCF > > > > >>> > > > > > > > > > > > >>> > > > > > > > cannot be held responsible for requests > > > > not > > > > dealt > > > > with in a > > > > >>> > > timely > > > > >>> > > > > > > > manner. > > > > >>> > > > > > > > > > > > >>> > > > > > > > OCF plc is a company registered in > > > > England > > > > and Wales. > > > > >>> > > > > > > > Registered > > > > >>> > > > > number > > > > >>> > > > > > > > > > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. > > > > Registered office > > > > >>> > > > > > > > address: > > > > >>> > > OCF > > > > >>> > > > > plc, > > > > >>> > > > > > > > > > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe > > > > Park, > > > > Chapeltown, > > > > >>> > > Sheffield > > > > >>> > > > > S35 > > > > >>> > > > > > > > 2PG. > > > > >>> > > > > > > > > > > > >>> > > > > > > > If you have received this message in > > > > error, > > > > please > > > > notify > > > > >>> > > > > > > > us > > > > >>> > > > > > > > immediately and remove it from your > > > > system. 
> > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > > _______________________________________________ > > > > >>> > > > > > > > rdo-list mailing list > > > > >>> > > > > > > > rdo-list at redhat.com > > > at.c > > > > om> > > > > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/r > > > > do-l > > > > ist > > > > >>> > > > > > > > > > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redh > > > > at.c > > > > om > > > > > > > > >>> > > > > > > > > > > > >>> > > > > > > > > > > >>> > > > > > > > > > >>> > > > > > > > > >>> > > > > > > > >>> > > > > > > >>> > > > > > >> > > > > >> > > > > > > > > > > > > > > > _______________________________________________ > > > > > rdo-list mailing list > > > > > rdo-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > rdo-list mailing list > > > > rdo-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > > > -- > > > Graeme Gillies > > > Principal Systems Administrator > > > Openstack Infrastructure > > > Red Hat Australia > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > -- > > Regards, > > > > Christopher Brown > > OpenStack Engineer > > OCF plc > > > > Tel: +44 (0)114 257 2200 > > Web: www.ocf.co.uk > > Blog: blog.ocf.co.uk > > Twitter: @ocfplc > > > > Please note, any emails relating to an OCF Support request must > > always > > be sent to support at ocf.co.uk for a ticket number to be generated or > > existing support ticket to be updated. 
Should this not be done then > > OCF > > > > cannot be held responsible for requests not dealt with in a timely > > manner. > > > > OCF plc is a company registered in England and Wales. Registered > > number > > > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF > > plc, > > > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield > > S35 > > 2PG. > > > > If you have received this message in error, please notify us > > immediately and remove it from your system. > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com -- Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner. OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. If you have received this message in error, please notify us immediately and remove it from your system. From marius at remote-lab.net Mon Jun 6 12:19:39 2016 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 6 Jun 2016 14:19:39 +0200 Subject: [rdo-list] Baremetal Tripleo stable version? 
In-Reply-To: <1465214014.9673.59.camel@ocf.co.uk>
References: <1464964179.9673.30.camel@ocf.co.uk>
	<1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
	<934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
	<53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
	<1465210434.9673.50.camel@ocf.co.uk>
	<1465214014.9673.59.camel@ocf.co.uk>
Message-ID:

On Mon, Jun 6, 2016 at 1:53 PM, Christopher Brown wrote:
> On Mon, 2016-06-06 at 12:13 +0100, Marius Cornea wrote:
>> On Mon, Jun 6, 2016 at 12:53 PM, Christopher Brown wrote:
> [...]
>
> Running this command you see something like:
>
> [2016-06-04 10:57:09,195] (heat-config) [WARNING] Skipping config
> b28e3adb-ed18-4e74-94cf-260c3c1eefec, already deployed
> [2016-06-04 10:57:09,195] (heat-config) [WARNING] To force-deploy, rm
> /var/lib/heat-config/deployed/b28e3adb-ed18-4e74-94cf-260c3c1eefec.json
>
> Once we do this then the deployment continues.
>
> Hope this helps - thanks to Pedro for pointing me in the right
> direction.

A couple of thoughts about this: if it is STP causing timeouts due to
the transitioning states, I'd expect the DHCP requests to time out
earlier, during PXE boot. I think we should add this to the docs: the
switch ports where the provisioning NIC is connected should be
configured as portfast, otherwise you might see DHCP timeouts due to
the convergence time.

From what I can see the nodes got stuck later in the deployment
process, unable to reach the metadata server, which runs on the
undercloud. When I hit such an issue I usually try to see if I get any
response from 'curl http://169.254.169.254' and then proceed to debug
if it doesn't work.
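That quick check can be sketched as a small shell helper (an editorial
sketch, not a tool from this thread; the function name and the
fallback hints in the comments are assumptions drawn from the advice
in this message):

```shell
# Sketch of the metadata reachability check described above.
# check_metadata is a hypothetical helper, not part of TripleO.
check_metadata() {
    local url="${1:-http://169.254.169.254/}"
    if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
        echo "metadata reachable"
    else
        # Follow-up checks suggested in this thread:
        #   ip r get 169.254.169.254    # route should point at the undercloud IP
        #   sudo iptables -nL           # look for rules blocking the metadata server
        #   sudo systemctl restart os-collect-config
        echo "metadata unreachable"
    fi
}
```

Run it on an overcloud node; if it prints "metadata unreachable", work
through the routing and iptables checks in the comments.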
In the past I've seen several causes for this, such as incorrect
routing tables set by the NIC templates (you can check by running
'ip r get 169.254.169.254' and verifying that the router corresponds
to the undercloud IP address) or iptables rules on the undercloud that
blocked access to the metadata server. If it seems stuck with no
apparent reason, it's also worth trying to restart os-collect-config
via systemctl.

>> > Happy to help out with documentation and keeping errata/workarounds
>> > up to date - I think we just need a "stable" section of the website
>> > which doesn't seem to exist at the moment.
>> >
>> > Regards
>> >
>> > On Mon, 2016-06-06 at 00:37 +0100, Graeme Gillies wrote:
>> > >
>> > > Hi Everyone,
>> > >
>> > > I just wanted to say I have been following this thread quite
>> > > closely and can sympathize with some of the pain people are going
>> > > through to get TripleO to work.
>> > >
>> > > Currently it's quite difficult and a bit opaque how to actually
>> > > utilise the stable mitaka repos in order to build a functional
>> > > undercloud and overcloud environment.
>> > >
>> > > First I wanted to share the steps I have undergone in order to
>> > > get a functional overcloud working with RDO Mitaka utilising the
>> > > RDO stable release built by CentOS, and then I'll talk about some
>> > > specific steps I think need to be undertaken by the RDO/TripleO
>> > > team in order to provide a better experience in the future.
>> > >
>> > > To get a functional overcloud using RDO Mitaka, you need to do
>> > > the following:
>> > >
>> > > 1) Install EPEL on your undercloud
>> > > 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on
>> > > your undercloud
>> > > 3) Follow the normal steps to install your undercloud (modifying
>> > > undercloud.conf, and running openstack undercloud install)
>> > > 4) You will now need to manually patch ironic on the undercloud
>> > > in order to make sure repeated introspection works. This might
>> > > not be needed if you don't do any introspection, but I find more
>> > > often than not you end up having to do it, so it's worthwhile.
>> > > The bug you need to patch is [1] and I typically run the
>> > > following commands to apply the patch:
>> > >
>> > > # sudo su -
>> > > $ cd /usr/lib/python2.7/site-packages
>> > > $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1
>> > > $ systemctl restart openstack-ironic-inspector
>> > > $ systemctl restart openstack-ironic-inspector-dnsmasq
>> > > $ exit
>> > >
>> > > 5) Manually patch the undercloud to build overcloud images using
>> > > the rdo-release rpm only (which utilises the stable Mitaka repo
>> > > from CentOS, and nothing from RDO Trunk [delorean]). I do this by
>> > > modifying the file
>> > >
>> > > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
>> > >
>> > > At around line 467 you will see a reference to epel; I add a new
>> > > line after that to include the rdo_release DIB element to the
>> > > build as well. This typically makes the file look something like
>> > >
>> > > http://paste.openstack.org/show/508196/
>> > >
>> > > (note line 468). Then I create a directory to store my images,
>> > > build them specifying the mitaka version of rdo_release, and
>> > > upload these images:
>> > >
>> > > # mkdir ~/images
>> > > # cd ~/images
>> > > # export RDO_RELEASE=mitaka
>> > > # openstack overcloud image build --all
>> > > # openstack overcloud image upload --update-existing
>> > >
>> > > 6) Because of the bug at [2] which affects the ironic agent
>> > > ramdisk, we need to build a set of images utilising RDO Trunk
>> > > for the mitaka branch (where the fix is applied), and then
>> > > upload *only* the new ironic ramdisk. This is done with:
>> > >
>> > > # mkdir ~/images-mitaka-trunk
>> > > # cd ~/images-mitaka-trunk
>> > > # export USE_DELOREAN_TRUNK=1
>> > > # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
>> > > # export DELOREAN_REPO_FILE="delorean.repo"
>> > > # openstack overcloud image build --type agent-ramdisk
>> > > # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk
>> > >
>> > > 7) Follow the rest of the documentation to deploy the overcloud
>> > > normally
>> > >
>> > > Please note that obviously your mileage may vary, and this is by
>> > > all means not an exclusive list of the problems. I have however
>> > > used these steps to do multiple node deployments (10+ nodes)
>> > > with HA over different hardware sets with different networking
>> > > setups (single nic, multiple nic with bonding + vlans).
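Step 5's hand edit (adding the rdo_release DIB element right after the
existing epel entry in overcloud_image.py) amounts to a one-line list
insertion. A standalone sketch of that idea, with hypothetical names
(the real file's structure is in the paste link above, not reproduced
here):

```python
# Sketch only: mimics inserting the 'rdo_release' DIB element after
# 'epel' in a list of image-build elements. The function and list
# contents are illustrative, not the actual tripleoclient code.
def add_rdo_release(elements):
    out = list(elements)
    if 'rdo_release' not in out and 'epel' in out:
        # Insert immediately after 'epel', matching Graeme's hand edit.
        out.insert(out.index('epel') + 1, 'rdo_release')
    return out

print(add_rdo_release(['base', 'epel', 'selinux-permissive']))
# prints ['base', 'epel', 'rdo_release', 'selinux-permissive']
```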
>> > >
>> > > With all the different repos floating around, all of which
>> > > change very rapidly, combined with the documentation defaults
>> > > targeting developers and CI systems (not end users), it's hard
>> > > to not only get a stable TripleO install up, but also
>> > > communicate and discuss clearly with others what is working,
>> > > what is broken, and how to compare two installations to see if
>> > > they are experiencing the same issues.
>> > >
>> > > To this end I would like to suggest to the RDO and TripleO
>> > > community that we undertake the following:
>> > >
>> > > 1) Overhaul all the TripleO documentation so that all the steps
>> > > default to utilising/deploying using RDO Stable (that is, the
>> > > releases done by CBS). There should be colored boxes with alt
>> > > steps for those who wish to use RDO Trunk on the stable branch,
>> > > and RDO Trunk from master. This basically inverts the current
>> > > pattern. I think anyone, Operator or developer, who is working
>> > > through the documentation for the first time should be given
>> > > steps that maximise the chance of success, and thus the most
>> > > stable release we have. Once a user has gone through the process
>> > > once, they can look at the alternative steps for more aggressive
>> > > releases.
>> > >
>> > > 2) Patch python-tripleoclient so that by default, when you run
>> > > "openstack overcloud image build", it builds the images
>> > > utilising the rdo_release DIB element, and sets the RDO_RELEASE
>> > > environment variable to 'mitaka' or whatever the current stable
>> > > release is (and we should endeavour to update it with new
>> > > releases). There should be no extra environment variables
>> > > necessary to build images, and by default it should never touch
>> > > anything RDO Trunk (delorean) related.
>> > >
>> > > 3) For bugs like the two I have mentioned above, we need to have
>> > > some sort of robust process for either backporting those patches
>> > > to the builds in CBS (I understand we don't do this for various
>> > > reasons), or we need some kind of tooling or solution that
>> > > allows operators to apply only the fixes they need from RDO
>> > > Trunk (delorean). We need to ensure that when an Operator
>> > > utilises TripleO they have the greatest chance of success, and
>> > > bugs such as these which severely impact the deployment process
>> > > harm the adoption of TripleO and RDO.
>> > >
>> > > 4) We should curate and keep an up to date page on rdoproject.org
>> > > that highlights the outstanding issues related to TripleO on the
>> > > RDO Stable (CBS) releases. These should have links to relevant
>> > > bugzillas, clean instructions on how to work around the issue,
>> > > or cleanly apply a patch to avoid the issue, and as new releases
>> > > make it out, we should update the page to drop off workarounds
>> > > that are no longer needed.
>> > >
>> > > The goal is to push Operators/Users to working with our most
>> > > stable code as much as possible, and track/curate issues around
>> > > that. This way everyone should be on the same page, issues are
>> > > easier to discuss and diagnose, and overall people's experiences
>> > > should be better.
>> > >
>> > > I'm interested in thoughts, feedback, and concerns, both from
>> > > the RDO and TripleO community, and from the Operator/User
>> > > community.
>> > >
>> > > Regards,
>> > >
>> > > Graeme
>> > >
>> > > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447
>> > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892
>> > >
>> > > On 05/06/16 02:04, Pedro Sousa wrote:
>> > > >
>> > > > Thanks Marius,
>> > > >
>> > > > I can confirm that it installs fine with 3 controllers + 3
>> > > > computes after recreating the stack
>> > > >
>> > > > Regards
>> > > >
>> > > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea
>> > > > <marius at remote-lab.net> wrote:
>> > > > >
>> > > > > Hi Pedro,
>> > > > >
>> > > > > Scaling out controller nodes is not supported at this moment:
>> > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1243312
>> > > > >
>> > > > > On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa
>> > > > > <pgsousa at gmail.com> wrote:
>> > > > > > Hi,
>> > > > > >
>> > > > > > some update on scaling the cloud:
>> > > > > >
>> > > > > > 1 controller + 1 compute -> 1 controller + 3 computes: OK
>> > > > > >
>> > > > > > 1 controller + 3 computes -> 3 controllers + 3 computes: FAILS
>> > > > > >
>> > > > > > Problem: the new controller nodes are "stuck" in "pcsd
>> > > > > > start", so it seems to be a problem joining the pacemaker
>> > > > > > cluster... Did anyone have this problem?
>> > > > > >
>> > > > > > Regards
>> > > > > >
>> > > > > > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa
>> > > > > > <pgsousa at gmail.com> wrote:
>> > > > > > >
>> > > > > > > Hi,
>> > > > > > >
>> > > > > > > I finally managed to install a baremetal in mitaka with
>> > > > > > > 1 controller + 1 compute with network isolation.
>> > > > > > > Thank god :)
>> > > > > > >
>> > > > > > > All I did was:
>> > > > > > >
>> > > > > > > # yum install centos-release-openstack-mitaka
>> > > > > > > # sudo yum install python-tripleoclient
>> > > > > > >
>> > > > > > > without epel repos.
>> > > > > > >
>> > > > > > > Then followed instructions from the Red Hat site.
>> > > > > > >
>> > > > > > > I downloaded the overcloud images from:
>> > > > > > > http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
>> > > > > > >
>> > > > > > > I do have an issue that forces me to delete a json file
>> > > > > > > and run os-refresh-config inside my overcloud nodes;
>> > > > > > > other than that it installs fine.
>> > > > > > >
>> > > > > > > Now I'll test with 2 more controllers + 2 computes to
>> > > > > > > have a full HA deployment.
>> > > > > > >
>> > > > > > > If anyone needs help to document this I'll be happy to
>> > > > > > > help.
>> > > > > > >
>> > > > > > > Regards,
>> > > > > > > Pedro Sousa
>> > > > > > >
>> > > > > > > On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:
>> > > > > > > >
>> > > > > > > > The report says: "Fix Released" as of 2016-05-24.
>> > > > > > > > Are you installing on a clean system with the latest
>> > > > > > > > repositories?
>> > > > > > > >
>> > > > > > > > Might also want to check your version of rabbitmq: I
>> > > > > > > > have rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
>> > > > > > > >
>> > > > > > > > ----- Original Message -----
>> > > > > > > > > From: "Pedro Sousa" <pgsousa at gmail.com>
>> > > > > > > > > To: "Ronelle Landy"
>> > > > > > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
>> > > > > > > > > Sent: Friday, June 3, 2016 1:20:43 PM
>> > > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > > > > > >
>> > > > > > > > > Anyway to work around this? Maybe downgrade hiera?
>> > > > > > > > >
>> > > > > > > > > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote:
>> > > > > > > > > >
>> > > > > > > > > > I am not sure exactly where you installed from,
>> > > > > > > > > > and when you did your installation, but any
>> > > > > > > > > > chance you've hit:
>> > > > > > > > > > https://bugs.launchpad.net/tripleo/+bug/1584892?
>> > > > > > > > > > There is a linked bugzilla record.
>> > > > > > > > > >
>> > > > > > > > > > ----- Original Message -----
>> > > > > > > > > > > From: "Pedro Sousa" <pgsousa at gmail.com>
>> > > > > > > > > > > To: "Ronelle Landy"
>> > > > > > > > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
>> > > > > > > > > > > Sent: Friday, June 3, 2016 12:26:58 PM
>> > > > > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > > > > > > > >
>> > > > > > > > > > > Thanks Ronelle,
>> > > > > > > > > > >
>> > > > > > > > > > > do you think this kind of error can be related
>> > > > > > > > > > > to network settings?
>> > > > > > > > > > >
>> > > > > > > > > > > "Could not retrieve fact='rabbitmq_nodename',
>> > > > > > > > > > > resolution='': undefined method `[]' for
>> > > > > > > > > > > nil:NilClass Could not retrieve
>> > > > > > > > > > > fact='rabbitmq_nodename', resolution='':
>> > > > > > > > > > > undefined method `[]' for nil:NilClass"
>> > > > > > > > > > >
>> > > > > > > > > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote:
>> > > > > > > > > > > >
>> > > > > > > > > > > > Hi Pedro,
>> > > > > > > > > > > >
>> > > > > > > > > > > > You could use the docs you referred to.
>> > > > > > > > > > > > Alternatively, if you want to use a vm for
>> > > > > > > > > > > > the undercloud and baremetal machines for the
>> > > > > > > > > > > > overcloud, it is possible to use Tripleo
>> > > > > > > > > > > > Quickstart with a few modifications.
>> > > > > > > > > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028
>> > > > > > > > > > > >
>> > > > > > > > > > > > ----- Original Message -----
>> > > > > > > > > > > > > From: "Pedro Sousa" <pgsousa at gmail.com>
>> > > > > > > > > > > > > To: "Ronelle Landy"
>> > > > > > > > > > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
>> > > > > > > > > > > > > Sent: Friday, June 3, 2016 11:48:38 AM
>> > > > > > > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > Hi Ronelle,
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > maybe I understand it wrong but I thought
>> > > > > > > > > > > > > that Tripleo Quickstart was for deploying
>> > > > > > > > > > > > > virtual environments?
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > And for baremetal we should use
>> > > > > > > > > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html
>> > > > > > > > > > > > > ?
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > Thanks
>> > > > > > > > > > > > >
>> > > > > > > > > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote:
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > Hello,
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > We have had success deploying RDO (Mitaka)
>> > > > > > > > > > > > > > on baremetal systems - using Tripleo
>> > > > > > > > > > > > > > Quickstart with both single-nic-vlans and
>> > > > > > > > > > > > > > bond-with-vlans network isolation
>> > > > > > > > > > > > > > configurations.
>> > > > > > > > > > > > > >
>> > > > > > > > > > > > > > Baremetal can have some complicated
>> > > > > > > > > > > > > > networking issues but, from previous
>> > > > > > > > > > > > > > experiences, if a single-controller
>> > > > > > > > > > > > > > deployment worked but a HA deployment did
>> > > > > > > > > > > > > > not, I would check:
>> > > > > > > > > > > > > > - does the HA deployment command include:
>> > > > > > > > > > > > > > -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
>> > > > > > > > > > > > > > - are there possible MTU issues?
>> > > > >>> > > > > > > > > > > >> > > > >>> > > > > > > > > > > OCF plc is a company registered in >> > > > England >> > > > and Wales. >> > > > >>> > > > > Registered >> > > > >>> > > > > > > > > > > number >> > > > >>> > > > > > > > > > > >> > > > >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. >> > > > Registered office >> > > > >>> > > address: >> > > > >>> > > > > > > > > > > OCF plc, >> > > > >>> > > > > > > > > > > >> > > > >>> > > > > > > > > > > 5 Rotunda Business Centre, >> > > > Thorncliffe >> > > > Park, >> > > > >>> > > > > > > > > > > Chapeltown, >> > > > >>> > > > > > > > > > > Sheffield S35 >> > > > >>> > > > > > > > > > > 2PG. >> > > > >>> > > > > > > > > > > >> > > > >>> > > > > > > > > > > If you have received this message >> > > > in >> > > > error, >> > > > please >> > > > >>> > > > > > > > > > > notify >> > > > >>> > > us >> > > > >>> > > > > > > > > > > immediately and remove it from your >> > > > system. >> > > > >>> > > > > > > > > > > >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > >> > > > _______________________________________________ >> > > > >>> > > > > > > > > rdo-list mailing list >> > > > >>> > > > > > > > > rdo-list at redhat.com > > > > dhat >> > > > .com> >> > > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo >> > > > /rdo >> > > > -list >> > > > >>> > > > > > > > > >> > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at re >> > > > dhat >> > > > .com >> > > > >> > > > >>> > > > > > > > -- >> > > > >>> > > > > > > > Regards, >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > Christopher Brown >> > > > >>> > > > > > > > OpenStack Engineer >> > > > >>> > > > > > > > OCF plc >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > Tel: +44 (0)114 257 2200 >> > > > >> > > > >>> > > > > > > > Web: www.ocf.co.uk >> > > > >>> > > > > > > > Blog: blog.ocf.co.uk > > > > uk> >> > > > >>> > > > > > > > Twitter: @ocfplc >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > Please note, any 
emails relating to an >> > > > OCF >> > > > Support >> > > > request >> > > > >>> > > > > > > > must >> > > > >>> > > > > always >> > > > >>> > > > > > > > be sent to support at ocf.co.uk >> > > > for a ticket number to be >> > > > >>> > > generated or >> > > > >>> > > > > > > > existing support ticket to be updated. >> > > > Should >> > > > this >> > > > not be >> > > > >>> > > > > > > > done >> > > > >>> > > then >> > > > >>> > > > > OCF >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > cannot be held responsible for requests >> > > > not >> > > > dealt >> > > > with in a >> > > > >>> > > timely >> > > > >>> > > > > > > > manner. >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > OCF plc is a company registered in >> > > > England >> > > > and Wales. >> > > > >>> > > > > > > > Registered >> > > > >>> > > > > number >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > 4132533, VAT number GB 780 6803 14. >> > > > Registered office >> > > > >>> > > > > > > > address: >> > > > >>> > > OCF >> > > > >>> > > > > plc, >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe >> > > > Park, >> > > > Chapeltown, >> > > > >>> > > Sheffield >> > > > >>> > > > > S35 >> > > > >>> > > > > > > > 2PG. >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > If you have received this message in >> > > > error, >> > > > please >> > > > notify >> > > > >>> > > > > > > > us >> > > > >>> > > > > > > > immediately and remove it from your >> > > > system. 
>> > > > >>> > > > > > > > >> > > > >>> > > > > > > > >> > > > _______________________________________________ >> > > > >>> > > > > > > > rdo-list mailing list >> > > > >>> > > > > > > > rdo-list at redhat.com > > > > at.c >> > > > om> >> > > > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/r >> > > > do-l >> > > > ist >> > > > >>> > > > > > > > >> > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redh >> > > > at.c >> > > > om >> > > > >> > > > >>> > > > > > > > >> > > > >>> > > > > > > >> > > > >>> > > > > > >> > > > >>> > > > > >> > > > >>> > > > >> > > > >>> > > >> > > > >>> > >> > > > >> >> > > > >> >> > > > > >> > > > > >> > > > > _______________________________________________ >> > > > > rdo-list mailing list >> > > > > rdo-list at redhat.com >> > > > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > > > >> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > >> > > > >> > > > >> > > > >> > > > >> > > > _______________________________________________ >> > > > rdo-list mailing list >> > > > rdo-list at redhat.com >> > > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > > >> > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > > >> > > -- >> > > Graeme Gillies >> > > Principal Systems Administrator >> > > Openstack Infrastructure >> > > Red Hat Australia >> > > >> > > _______________________________________________ >> > > rdo-list mailing list >> > > rdo-list at redhat.com >> > > https://www.redhat.com/mailman/listinfo/rdo-list >> > > >> > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> > -- >> > Regards, >> > >> > Christopher Brown >> > OpenStack Engineer >> > OCF plc >> > >> > Tel: +44 (0)114 257 2200 >> > Web: www.ocf.co.uk >> > Blog: blog.ocf.co.uk >> > Twitter: @ocfplc >> > >> > Please note, any emails relating to an OCF Support request must >> > always >> > be sent to support at ocf.co.uk for a ticket number to be generated or >> > existing support ticket to be updated. 
Should this not be done then >> > OCF >> > >> > cannot be held responsible for requests not dealt with in a timely >> > manner. >> > >> > OCF plc is a company registered in England and Wales. Registered >> > number >> > >> > 4132533, VAT number GB 780 6803 14. Registered office address: OCF >> > plc, >> > >> > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield >> > S35 >> > 2PG. >> > >> > If you have received this message in error, please notify us >> > immediately and remove it from your system. >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- > Regards, > > Christopher Brown > OpenStack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > Please note, any emails relating to an OCF Support request must always > be sent to support at ocf.co.uk for a ticket number to be generated or > existing support ticket to be updated. Should this not be done then OCF > > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. Registered number > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > If you have received this message in error, please notify us > immediately and remove it from your system. 
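[Editorial aside: the two repo-enable commands quoted above can be collected into a short shell sketch. The RELEASE parameter and the echo-only dry-run behaviour are illustrative additions, not part of the thread; the URLs are the Liberty-era ones given in the message.]

```shell
#!/bin/sh
# Sketch of the "enable Delorean repos" steps quoted above.
# RELEASE is an illustrative parameter (the thread uses "liberty");
# the script only echoes the commands, so it is safe to dry-run.
RELEASE="${1:-liberty}"
TRUNK="https://trunk.rdoproject.org/centos7-$RELEASE"

delorean_repo="/etc/yum.repos.d/delorean-$RELEASE.repo"
deps_repo="/etc/yum.repos.d/delorean-deps-$RELEASE.repo"

# The thread runs these with sudo; echo them here instead of executing.
echo "sudo curl -o $delorean_repo $TRUNK/current/delorean.repo"
echo "sudo curl -o $deps_repo $TRUNK/delorean-deps.repo"
```

Run with no argument it prints the exact Liberty-era commands; a different branch name substitutes into both URLs.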
From trown at redhat.com  Mon Jun  6 12:26:28 2016
From: trown at redhat.com (John Trowbridge)
Date: Mon, 6 Jun 2016 08:26:28 -0400
Subject: [rdo-list] TripleO Install Failure
In-Reply-To: <749735896.1189718.1465059166336.JavaMail.yahoo@mail.yahoo.com>
References: <749735896.1189718.1465059166336.JavaMail.yahoo.ref@mail.yahoo.com>
 <749735896.1189718.1465059166336.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <57556BF4.2080904@redhat.com>

Could you provide a bit more detail about how you ran the quickstart?
It looks like you ran it as root, pointing at 192.168.0.24, and that
the undercloud it deployed was unreachable.

You could try:

`ssh -F /root/.quickstart/ssh.config.ansible undercloud`

That would allow you to troubleshoot why the undercloud is unreachable.

On 06/04/2016 12:52 PM, Prakash Kanthi wrote:
> Hi There,
>
> I am trying to install OpenStack using the TripleO quickstart script
> on a single server. I am running into the following error, and the
> script stops. Can you please tell me what is going on?
>
> Thanks,
> PK
>
> TASK [setup/undercloud : Set_fact for undercloud ip] ***************************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:180
> Saturday 04 June 2016  11:22:33 -0500 (0:01:35.278)  0:08:30.041 *********
> ok: [192.168.0.24] => {"ansible_facts": {"undercloud_ip": "192.168.23.37"}, "changed": false}
>
> TASK [setup/undercloud : Wait until ssh is available on undercloud node] *******
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:184
> Saturday 04 June 2016  11:22:34 -0500 (0:00:01.249)  0:08:31.291 *********
> ok: [192.168.0.24] => {"changed": false, "elapsed": 0, "path": null, "port": 22, "search_regex": null, "state": "started"}
>
> TASK [setup/undercloud : Add undercloud vm to inventory] ***********************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:192
> Saturday 04 June 2016  11:22:36 -0500 (0:00:01.610)  0:08:32.902 *********
> creating host via 'add_host': hostname=undercloud
> changed: [192.168.0.24] => {"add_host": {"groups": ["undercloud"], "host_name": "undercloud", "host_vars": {"ansible_fqdn": "undercloud", "ansible_host": "undercloud", "ansible_private_key_file": "/root/.quickstart/id_rsa_undercloud", "ansible_ssh_extra_args": "-F \"/root/.quickstart/ssh.config.ansible\"", "ansible_user": "stack"}}, "changed": true}
>
> TASK [setup/undercloud : Generate ssh configuration] ***************************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:202
> Saturday 04 June 2016  11:22:36 -0500 (0:00:00.687)  0:08:33.590 *********
> changed: [192.168.0.24 -> localhost] => {"changed": true, "checksum": "cf7f920ffcaffc8087068797ced179782cb2c167", "dest": "/root/.quickstart/ssh.config.ansible", "gid": 0, "group": "root", "md5sum": "92a43943cbdc33b719c87d7f51e5c66a", "mode": "0644", "owner": "root", "size": 813, "src": "/root/.ansible/tmp/ansible-tmp-1465057357.19-199551301253928/source", "state": "file", "uid": 0}
>
> TASK [setup/undercloud : Configure Ironic pxe_ssh driver] **********************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:211
> Saturday 04 June 2016  11:22:38 -0500 (0:00:01.813)  0:08:35.404 *********
> fatal: [192.168.0.24]: UNREACHABLE! => {"changed": false, "msg": "[Errno -2] Name or service not known", "unreachable": true}
>
> PLAY [Rebuild inventory] *******************************************************
>
> TASK [setup] *******************************************************************
> Saturday 04 June 2016  11:22:39 -0500 (0:00:01.065)  0:08:36.469 *********
> ok: [localhost]
>
> TASK [rebuild-inventory : Ensure local working dir exists] *********************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/rebuild-inventory/tasks/main.yml:1
> Saturday 04 June 2016  11:22:47 -0500 (0:00:07.951)  0:08:44.421 *********
> ok: [localhost -> localhost] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/root/.quickstart", "size": 4096, "state": "directory", "uid": 0}
>
> TASK [rebuild-inventory : rebuild-inventory] ***********************************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/rebuild-inventory/tasks/main.yml:11
> Saturday 04 June 2016  11:22:48 -0500 (0:00:01.076)  0:08:45.497 *********
> changed: [localhost] => {"changed": true, "checksum": "41c0b5fb2439a528d0b0c6b0f979d3159c3446ca", "dest": "/root/.quickstart/hosts", "gid": 0, "group": "root", "md5sum": "0358d6b476bc5993eb9f31c57d234be6", "mode": "0644", "owner": "root", "size": 410, "src": "/root/.ansible/tmp/ansible-tmp-1465057369.05-223284897662661/source", "state": "file", "uid": 0}
>
> PLAY [Install undercloud and deploy overcloud] *********************************
>
> TASK [tripleo/undercloud : include] ********************************************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks/main.yml:1
> Saturday 04 June 2016  11:22:50 -0500 (0:00:01.630)  0:08:47.130 *********
> included: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks/create-scripts.yml for undercloud
>
> TASK [tripleo/undercloud : Create undercloud configuration] ********************
> task path: /root/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks/create-scripts.yml:3
> Saturday 04 June 2016  11:22:51 -0500 (0:00:01.053)  0:08:48.184 *********
> fatal: [undercloud]: UNREACHABLE! => {"changed": false, "msg": "[Errno -2] Name or service not known", "unreachable": true}
>
> PLAY RECAP *********************************************************************
> 192.168.0.24  : ok=92  changed=46  unreachable=1  failed=0
> localhost     : ok=10  changed=4   unreachable=0  failed=0
> undercloud    : ok=1   changed=0   unreachable=1  failed=0
>
> Saturday 04 June 2016  11:22:53 -0500 (0:00:01.470)  0:08:49.655 *********
> ===============================================================================
> TASK: setup/undercloud : Get undercloud vm ip address ------------------ 95.28s
> TASK: setup/undercloud : Resize undercloud image (call virt-resize) ---- 93.98s
> TASK: setup/undercloud : Upload undercloud volume to storage pool ------ 66.46s
> TASK: setup/undercloud : Get qcow2 image from cache -------------------- 58.57s
> TASK: setup/undercloud : Copy instackenv.json to appliance ------------- 14.46s
> TASK: setup ------------------------------------------------------------- 8.64s
> TASK: setup/undercloud : Inject undercloud ssh public key to appliance --- 8.36s
> TASK: setup ------------------------------------------------------------- 8.32s
> TASK: setup ------------------------------------------------------------- 7.95s
> TASK: setup/undercloud : Perform selinux relabel on undercloud image ---- 4.34s
> TASK: setup/overcloud : Check if overcloud volumes exist ---------------- 2.88s
> TASK: environment/setup : Whitelist bridges for unprivileged access ----- 2.73s
> TASK: environment/setup : Start libvirt networks ------------------------ 2.61s
> TASK: setup ------------------------------------------------------------- 2.48s
> TASK: environment/teardown : Undefine libvirt networks ------------------ 2.46s
> TASK: parts/libvirt : Install packages for libvirt ---------------------- 2.46s
> TASK: provision/teardown : Remove non-root user account ----------------- 2.43s
> TASK: teardown/nodes : Delete baremetal vm storage ---------------------- 2.39s
> TASK: setup/undercloud : Start undercloud vm ---------------------------- 2.36s
> TASK: environment/setup : Mark libvirt networks as autostarted ---------- 2.34s
> [root at sightApps65 ostest]#
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From pgsousa at gmail.com  Mon Jun  6 13:30:56 2016
From: pgsousa at gmail.com (Pedro Sousa)
Date: Mon, 6 Jun 2016 14:30:56 +0100
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References: <1464964179.9673.30.camel@ocf.co.uk>
 <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com>
 <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
 <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
 <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
 <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
 <1465210434.9673.50.camel@ocf.co.uk>
 <1465214014.9673.59.camel@ocf.co.uk>
Message-ID:

Hi Marius,

On Cisco switches I have to activate STP portfast on the provisioning
network, otherwise introspection doesn't work; I get DHCP timeouts.
interface FastEthernet0/3
 description F=I, E=MTQ-itcCTRL-01, P=Eth1, X=Management Openstack
 switchport trunk native vlan 2210
 switchport trunk allowed vlan 2210,2014,2015
 switchport mode trunk
 spanning-tree portfast

I also have Dell switches and didn't have this problem.

The metadata problem seems related to some timeout on the interfaces
that get an IP from DHCP for the first time, and not to iptables or
routing, because as soon as I run the curl command from the command
line it works fine. So I force it by deleting the json file from the
deployed folder and running "os-refresh-config", and the installation
resumes.

Regards

On 06/06/2016 13:20, "Marius Cornea" wrote:

On Mon, Jun 6, 2016 at 1:53 PM, Christopher Brown <cbrown2 at ocf.co.uk> wrote:
> On Mon, 2016-06-06 at 12:13 +0100, Marius Cornea wrote:
>> On Mon, Jun 6, 2016 at 12:53 PM, Christopher Brown <cbrown2 at ocf.co.uk> wrote:
> [...]
>
> Running this command you see something like:
>
> [2016-06-04 10:57:09,195] (heat-config) [WARNING] Skipping config
> b28e3adb-ed18-4e74-94cf-260c3c1eefec, already deployed
> [2016-06-04 10:57:09,195] (heat-config) [WARNING] To force-deploy, rm
> /var/lib/heat-config/deployed/b28e3adb-ed18-4e74-94cf-260c3c1eefec.json
>
> Once we do this then the deployment continues.
>
> Hope this helps - thanks to Pedro for pointing me in the right
> direction.

A couple of thoughts about this: if it were STP causing timeouts due
to the transitioning states, I'd expect the DHCP requests to time out
earlier, during PXE boot. I think we should add this to the docs: the
switch ports where the provisioning nic is connected should be
configured as portfast, otherwise you might see DHCP timeouts due to
the convergence time.

From what I can see, the nodes got stuck later in the deployment
process, unable to reach the metadata server, which runs on the
undercloud. When I hit such an issue I usually check whether I get any
response with 'curl http://169.254.169.254' and then proceed to debug
if it doesn't work.
In the past I've seen several causes for this, such as incorrect
routing tables set by the nic templates (you can check with
'ip r get 169.254.169.254' that the router corresponds to the
undercloud IP address) or iptables rules on the undercloud that blocked
access to the metadata server. If it seems stuck for no apparent
reason, it's also worth trying to restart os-collect-config with
systemctl.

> Happy to help out with documentation and keeping errata/workarounds
> up to date - I think we just need a "stable" section of the website,
> which doesn't seem to exist at the moment.
>
> Regards
>
> On Mon, 2016-06-06 at 00:37 +0100, Graeme Gillies wrote:
> > Hi Everyone,
> >
> > I just wanted to say I have been following this thread quite
> > closely and can sympathize with some of the pain people are going
> > through to get TripleO to work.
> >
> > Currently it's quite difficult and a bit opaque how to actually
> > utilise the stable Mitaka repos in order to build a functional
> > undercloud and overcloud environment.
> >
> > First I wanted to share the steps I have undergone in order to get
> > a functional overcloud working with RDO Mitaka utilising the RDO
> > stable release built by CentOS, and then I'll talk about some
> > specific steps I think need to be undertaken by the RDO/TripleO
> > team in order to provide a better experience in the future.
> > To get a functional overcloud using RDO Mitaka, you need to do the
> > following:
> >
> > 1) Install EPEL on your undercloud
> >
> > 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your
> > undercloud
> >
> > 3) Follow the normal steps to install your undercloud (modifying
> > undercloud.conf, and running openstack undercloud install)
> >
> > 4) You will now need to manually patch ironic on the undercloud in
> > order to make sure repeated introspection works. This might not be
> > needed if you don't do any introspection, but I find more often
> > than not you end up having to do it, so it's worthwhile. The bug
> > you need to patch is [1], and I typically run the following
> > commands to apply the patch:
> >
> > # sudo su -
> > $ cd /usr/lib/python2.7/site-packages
> > $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1
> > $ systemctl restart openstack-ironic-inspector
> > $ systemctl restart openstack-ironic-inspector-dnsmasq
> > $ exit
> > #
> >
> > 5) Manually patch the undercloud to build overcloud images using
> > the rhos-release rpm only (which utilises the stable Mitaka repo
> > from CentOS, and nothing from RDO Trunk [delorean]). I do this by
> > modifying the file
> >
> > /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
> >
> > At around line 467 you will see a reference to epel; I add a new
> > line after that to include the rdo_release DIB element in the build
> > as well. This typically makes the file look something like
> >
> > http://paste.openstack.org/show/508196/
> >
> > (note line 468). Then I create a directory to store my images and
> > build them specifying the mitaka version of rdo_release. I then
> > upload these images:
> >
> > # mkdir ~/images
> > # cd ~/images
> > # export RDO_RELEASE=mitaka
> > # openstack overcloud image build --all
> > # openstack overcloud image upload --update-existing
> >
> > 6) Because of the bug at [2], which affects the ironic agent
> > ramdisk, we need to build a set of images utilising RDO Trunk for
> > the mitaka branch (where the fix is applied), and then upload
> > *only* the new ironic ramdisk. This is done with:
> >
> > # mkdir ~/images-mitaka-trunk
> > # cd ~/images-mitaka-trunk
> > # export USE_DELOREAN_TRUNK=1
> > # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
> > # export DELOREAN_REPO_FILE="delorean.repo"
> > # openstack overcloud image build --type agent-ramdisk
> > # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk
> >
> > 7) Follow the rest of the documentation to deploy the overcloud
> > normally.
> >
> > Please note that your mileage may vary, and this is by no means an
> > exhaustive list of the problems. I have however used these steps to
> > do multiple node deployments (10+ nodes) with HA over different
> > hardware sets with different networking setups (single nic,
> > multiple nics with bonding + vlans).
> >
> > With all the different repos floating around, all of which change
> > very rapidly, combined with the documentation defaults targeting
> > developers and CI systems (not end users), it's hard not only to
> > get a stable TripleO install up, but also to communicate and
> > discuss clearly with others what is working, what is broken, and
> > how to compare two installations to see if they are experiencing
> > the same issues.
> >
> > To this end I would like to suggest to the RDO and TripleO
> > community that we undertake the following:
> >
> > 1) Overhaul all the TripleO documentation so that all the steps
> > default to utilising/deploying RDO Stable (that is, the releases
> > done by CBS). There should be colored boxes with alternative steps
> > for those who wish to use RDO Trunk on the stable branch, and RDO
> > Trunk from master. This basically inverts the current pattern. I
> > think anyone, Operator or developer, working through the
> > documentation for the first time should be given the steps that
> > maximise the chance of success, and thus the most stable release we
> > have. Once a user has gone through the process once, they can look
> > at the alternative steps for more aggressive releases.
> >
> > 2) Patch python-tripleoclient so that by default, when you run
> > "openstack overcloud image build", it builds the images utilising
> > the rdo_release DIB element and sets the RDO_RELEASE environment
> > variable to 'mitaka' or whatever the current stable release is (and
> > we should endeavour to update it with new releases). There should
> > be no extra environment variables necessary to build images, and by
> > default it should never touch anything RDO Trunk (delorean)
> > related.
> >
> > 3) For bugs like the two I have mentioned above, we need some sort
> > of robust process for either backporting those patches to the
> > builds in CBS (I understand we don't do this for various reasons),
> > or some kind of tooling or solution that allows operators to apply
> > only the fixes they need from RDO Trunk (delorean). We need to
> > ensure that when an Operator utilises TripleO they have the
> > greatest chance of success; bugs such as these, which severely
> > impact the deployment process, harm the adoption of TripleO and
> > RDO.
> >
> > 4) We should curate and keep an up to date page on rdoproject.org
> > that highlights the outstanding issues related to TripleO on the
> > RDO Stable (CBS) releases. These should have links to relevant
> > bugzillas, clean instructions on how to work around each issue or
> > cleanly apply a patch to avoid it, and as new releases make it out,
> > we should update the page to drop workarounds that are no longer
> > needed.
> >
> > The goal is to push Operators/Users towards working with our most
> > stable code as much as possible, and to track/curate issues around
> > that. This way everyone should be on the same page, issues are
> > easier to discuss and diagnose, and overall people's experiences
> > should be better.
> >
> > I'm interested in thoughts, feedback, and concerns, both from the
> > RDO and TripleO community, and from the Operator/User community.
>> > > >> > > Regards, >> > > >> > > Graeme >> > > >> > > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 >> > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 >> > > >> > > On 05/06/16 02:04, Pedro Sousa wrote: >> > > > >> > > > >> > > > Thanks Marius, >> > > > >> > > > I can confirm that it installs fine with 3 controllers + 3 >> > > > computes >> > > > after recreating the stack >> > > > >> > > > Regards >> > > > >> > > > On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea > > > > b.ne >> > > > t >> > > > > wrote: >> > > > >> > > > Hi Pedro, >> > > > >> > > > Scaling out controller nodes is not supported at this >> > > > moment: >> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1243312 >> > > > >> > > > On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa > > > > com >> > > > > wrote: >> > > > > Hi, >> > > > > >> > > > > some update on scaling the cloud: >> > > > > >> > > > > 1 controller + 1 compute -> 1 controller + 3 computes OK >> > > > > >> > > > > 1 controller + 3 computes -> 3 controllers + 3 compute >> > > > FAILS >> > > > > >> > > > > Problem: The new controller nodes are "stuck" in "pscd >> > > > start", so >> > > > it seems >> > > > > to be a problem joining the pacemaker cluster... Did >> > > > anyone >> > > > had this >> > > > > problem? >> > > > > >> > > > > Regards >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > >> > > > > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa > > > > l.co >> > > > m >> > > > > wrote: >> > > > >> >> > > > >> Hi, >> > > > >> >> > > > >> I finally managed to install a baremetal in mitaka with >> > > > 1 >> > > > controller + 1 >> > > > >> compute with network isolation. Thank god :) >> > > > >> >> > > > >> All I did was: >> > > > >> >> > > > >> #yum install centos-release-openstack-mitaka >> > > > >> #sudo yum install python-tripleoclient >> > > > >> >> > > > >> without epel repos. >> > > > >> >> > > > >> Then followed instructions from Redhat Site. 
>> > > > >>
>> > > > >> I downloaded the overcloud images from:
>> > > > >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/
>> > > > >>
>> > > > >> I do have an issue that forces me to delete a json file and run os-refresh-config inside my overcloud nodes; other than that it installs fine.
>> > > > >>
>> > > > >> Now I'll test with 2 more controllers + 2 computes to have a full HA deployment.
>> > > > >>
>> > > > >> If anyone needs help to document this I'll be happy to help.
>> > > > >>
>> > > > >> Regards,
>> > > > >> Pedro Sousa
>> > > > >>
>> > > > >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy wrote:
>> > > > >>>
>> > > > >>> The report says: "Fix Released" as of 2016-05-24.
>> > > > >>> Are you installing on a clean system with the latest repositories?
>> > > > >>>
>> > > > >>> Might also want to check your version of rabbitmq: I have rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7.
>> > > > >>>
>> > > > >>> ----- Original Message -----
>> > > > >>> > From: "Pedro Sousa"
>> > > > >>> > To: "Ronelle Landy"
>> > > > >>> > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
>> > > > >>> > Sent: Friday, June 3, 2016 1:20:43 PM
>> > > > >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > >>> >
>> > > > >>> > Any way to work around this? Maybe downgrade hiera?
>> > > > >>> >
>> > > > >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy wrote:
>> > > > >>> >
>> > > > >>> > > I am not sure exactly where you installed from, and when you did your installation, but any chance you've hit https://bugs.launchpad.net/tripleo/+bug/1584892?
>> > > > >>> > > There is a linked bugzilla record.
>> > > > >>> > >
>> > > > >>> > > ----- Original Message -----
>> > > > >>> > > > From: "Pedro Sousa"
>> > > > >>> > > > To: "Ronelle Landy"
>> > > > >>> > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
>> > > > >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM
>> > > > >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > >>> > > >
>> > > > >>> > > > Thanks Ronelle,
>> > > > >>> > > >
>> > > > >>> > > > do you think this kind of error can be related to network settings?
>> > > > >>> > > >
>> > > > >>> > > > "Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass Could not retrieve fact='rabbitmq_nodename', resolution='': undefined method `[]' for nil:NilClass"
>> > > > >>> > > >
>> > > > >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote:
>> > > > >>> > > >
>> > > > >>> > > > > Hi Pedro,
>> > > > >>> > > > >
>> > > > >>> > > > > You could use the docs you referred to.
>> > > > >>> > > > > Alternatively, if you want to use a vm for the undercloud and baremetal machines for the overcloud, it is possible to use Tripleo Quickstart with a few modifications:
>> > > > >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028
>> > > > >>> > > > >
>> > > > >>> > > > > ----- Original Message -----
>> > > > >>> > > > > > From: "Pedro Sousa"
>> > > > >>> > > > > > To: "Ronelle Landy"
>> > > > >>> > > > > > Cc: "Christopher Brown", "Ignacio Bravo", "rdo-list"
>> > > > >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM
>> > > > >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > >>> > > > > >
>> > > > >>> > > > > > Hi Ronelle,
>> > > > >>> > > > > >
>> > > > >>> > > > > > maybe I understand it wrong but I thought that Tripleo Quickstart was for deploying virtual environments?
>> > > > >>> > > > > >
>> > > > >>> > > > > > And for baremetal we should use
>> > > > >>> > > > > > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html ?
>> > > > >>> > > > > >
>> > > > >>> > > > > > Thanks
>> > > > >>> > > > > >
>> > > > >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote:
>> > > > >>> > > > > >
>> > > > >>> > > > > > > Hello,
>> > > > >>> > > > > > >
>> > > > >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal systems - using Tripleo Quickstart with both single-nic-vlans and bond-with-vlans network isolation configurations.
>> > > > >>> > > > > > >
>> > > > >>> > > > > > > Baremetal can have some complicated networking issues but, from previous experience, if a single-controller deployment worked but a HA deployment did not, I would check:
>> > > > >>> > > > > > > - does the HA deployment command include: -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
>> > > > >>> > > > > > > - are there possible MTU issues?
>> > > > >>> > > > > > >
>> > > > >>> > > > > > > ----- Original Message -----
>> > > > >>> > > > > > > > From: "Christopher Brown"
>> > > > >>> > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com
>> > > > >>> > > > > > > > Cc: rdo-list at redhat.com
>> > > > >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM
>> > > > >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > Hello Ignacio,
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > Thanks for your response and good to know it isn't just me!
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > I would be more than happy to provide developers with access to our bare metal environments. I'll also file some bugzilla reports to see if this generates any interest.
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > Please do let me know if you make any progress - I am trying to deploy HA with network isolation, multiple nics and vlans.
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > The RDO web page states:
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > "If you want to create a production-ready cloud, you'll want to use the TripleO quickstart guide."
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > which is a contradiction in terms really.
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > Cheers
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote:
>> > > > >>> > > > > > > > > Pedro / Christopher,
>> > > > >>> > > > > > > > >
>> > > > >>> > > > > > > > > Just wanted to share with you that I also had plenty of issues deploying on bare metal HA servers, and have paused the deployment using TripleO until better winds start to flow here. I was able to deploy the QuickStart, but on bare metal the story was different. Couldn't even deploy a two server configuration.
>> > > > >>> > > > > > > > >
>> > > > >>> > > > > > > > > I was thinking that it would be good to have the developers have access to one of our environments and go through a full install with us to better see where things fail. We can do this handholding deployment once every week/month based on developers' time availability. That way we can get a working install, and we can troubleshoot real life environment problems.
>> > > > >>> > > > > > > > >
>> > > > >>> > > > > > > > > IB
>> > > > >>> > > > > > > > >
>> > > > >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote:
>> > > > >>> > > > > > > > >
>> > > > >>> > > > > > > > > > Yes. I've used this, but I'll try again as there seem to be new updates.
>> > > > >>> > > > > > > > > >
>> > > > >>> > > > > > > > > > Stable Branch: Skip all repos mentioned above, other than epel-release which is still required.
>> > > > >>> > > > > > > > > > Enable the latest RDO Stable Delorean repository for all packages:
>> > > > >>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
>> > > > >>> > > > > > > > > > Enable the Delorean Deps repository:
>> > > > >>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
>> > > > >>> > > > > > > > > >
>> > > > >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown wrote:
>> > > > >>> > > > > > > > > > > No, Liberty deployed ok for us.
>> > > > >>> > > > > > > > > > >
>> > > > >>> > > > > > > > > > > It suggests to me a package mismatch.
>> > > > >>> > > > > > > > > > > Have you completely rebuilt the undercloud and then the images using Liberty?
>> > > > >>> > > > > > > > > > >
>> > > > >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote:
>> > > > >>> > > > > > > > > > > > AttributeError: 'module' object has no attribute 'PortOpt'
>> > > > >>> > > > > > > > > > > --
>> > > > >>> > > > > > > > > > > Regards,
>> > > > >>> > > > > > > > > > > Christopher Brown
>> > > > >>> > > > > > > > > > > OpenStack Engineer
>> > > > >>> > > > > > > > > > > OCF plc
>> > > > >>> > > > > > > > > > >
>> > > > >>> > > > > > > > > > > Tel: +44 (0)114 257 2200
>> > > > >>> > > > > > > > > > > Web: www.ocf.co.uk
>> > > > >>> > > > > > > > > > > Blog: blog.ocf.co.uk
>> > > > >>> > > > > > > > > > > Twitter: @ocfplc
>> > > > >>> > > > > > > > > > >
>> > > > >>> > > > > > > > > > > Please note, any emails relating to an OCF Support request must always be sent to support at ocf.co.uk for a ticket number to be generated or existing support ticket to be updated. Should this not be done then OCF cannot be held responsible for requests not dealt with in a timely manner.
>> > > > >>> > > > > > > > > > >
>> > > > >>> > > > > > > > > > > OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG.
>> > > > >>> > > > > > > > > > >
>> > > > >>> > > > > > > > > > > If you have received this message in error, please notify us immediately and remove it from your system.
>> > > > >>> > > > > > > > >
>> > > > >>> > > > > > > > > _______________________________________________
>> > > > >>> > > > > > > > > rdo-list mailing list
>> > > > >>> > > > > > > > > rdo-list at redhat.com
>> > > > >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
>> > > > >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>> > > > >>> > > > > > > >
>> > > > >>> > > > > > > > --
>> > > > >>> > > > > > > > Regards,
>> > > > >>> > > > > > > > Christopher Brown
>> > > > >>> > > > > > > > OpenStack Engineer
>> > > > >>> > > > > > > > OCF plc
>> > > > >>> > > > > > > > _______________________________________________
>> > > > >>> > > > > > > > rdo-list mailing list
>> > > > >>> > > > > > > > rdo-list at redhat.com
>> > > > >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
>> > > > >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>> > > --
>> > > Graeme Gillies
>> > > Principal Systems Administrator
>> > > Openstack Infrastructure
>> > > Red Hat Australia
>> > --
>> > Regards,
>> > Christopher Brown
>> > OpenStack Engineer
>> > OCF plc
> --
> Regards,
> Christopher Brown
> OpenStack Engineer
> OCF plc
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From trown at redhat.com Mon Jun 6 13:34:32 2016
From: trown at redhat.com (John Trowbridge)
Date: Mon, 6 Jun 2016 09:34:32 -0400
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
Message-ID: <57557BE8.3080305@redhat.com>

Hola RDOistas,

First, thanks Graeme for the great write-up of how you use the RDO stable Mitaka release. I have some specific thoughts I will put in line, but I also have a couple more general responses to this whole thread.

RDO is a distribution of OpenStack. One of the projects we distribute is TripleO. In doing so, we provide a lot of feedback into the upstream project to improve it; however, improvements and feedback need to go upstream. It is totally fine to use rdo-list to confer with other RDO users about whether an issue is expected behavior, or some issue with how things are set up. However, up until this post, this thread has been a bit of a pile-on of all the problems people have with TripleO, without anything actionable at the RDO level.

What do I mean by actionable at the RDO level? I think the bare minimum here would be a bugzilla. Even better would be a launchpad bug for upstream TripleO. Even better would be some patch that resolves the issue for you. If the issue is one of those three things not getting attention, then the mailing list is a totally valid avenue to reach out for some help to drive those. After all, while RDO does provide a lot for free (as in free beer), the true benefit of open source is the freedom to help improve things.

- trown

On 06/05/2016 07:37 PM, Graeme Gillies wrote:
> Hi Everyone,
>
> I just wanted to say I have been following this thread quite closely and can sympathize with some of the pain people are going through to get TripleO to work.
> Currently it's quite difficult, and a bit opaque, how to actually utilise the stable mitaka repos in order to build a functional undercloud and overcloud environment.
>
> First I wanted to share the steps I have undergone in order to get a functional overcloud working with RDO Mitaka utilising the RDO stable release built by CentOS, and then I'll talk about some specific steps I think need to be undertaken by the RDO/TripleO team in order to provide a better experience in the future.
>
> To get a functional overcloud using RDO Mitaka, you need to do the following:
>
> 1) Install EPEL on your undercloud
> 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your undercloud
> 3) Follow the normal steps to install your undercloud (modifying undercloud.conf, and running openstack undercloud install)
> 4) You will now need to manually patch ironic on the undercloud in order to make sure repeated introspection works. This might not be needed if you don't do any introspection, but I find more often than not you end up having to do it, so it's worthwhile. The bug you need to patch is [1], and I typically run the following commands to apply the patch:
>
> # sudo su -
> $ cd /usr/lib/python2.7/site-packages
> $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1
> $ systemctl restart openstack-ironic-inspector
> $ systemctl restart openstack-ironic-inspector-dnsmasq
> $ exit
> #

This is actually a good example of something actionable at the RDO level. The fix for this is already backported to the stable/mitaka branch upstream, and just requires a rebase of the mitaka ironic-inspector package. The only thing that could be improved here is an open bugzilla, so that there was some RDO-level visibility that this package needed a rebase for a critical bug. I will take an action to file such a bugzilla, and do the rebase.
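For anyone adapting step 4 to a different review, the mechanics of Graeme's one-liner are worth spelling out: gerrit serves the patch base64-encoded, so it has to be decoded before being fed to patch -p1. Here is a minimal sketch of that decode-and-apply pattern, run against a throwaway local file and diff (both made up for illustration) so it works without network access:

```shell
# Demonstrate the base64-decode-then-patch pattern from step 4 offline.
# The file and the diff are fabricated; only the pipeline shape matters.
set -e
mkdir -p /tmp/patchdemo/pkg && cd /tmp/patchdemo
printf 'old line\n' > pkg/example.py
# A unified diff, as gerrit would serve it before base64 encoding:
cat > fix.patch <<'EOF'
--- a/pkg/example.py
+++ b/pkg/example.py
@@ -1 +1 @@
-old line
+new line
EOF
# gerrit's ...patch?download endpoint returns base64, hence the decode step;
# -p1 strips the leading a/ and b/ path components:
base64 fix.patch | base64 -d | patch -p1
cat pkg/example.py
```

The same pipeline works for any review: swap the local `base64 fix.patch` for the curl of the review's `patch?download` URL and run it from the directory the diff paths are relative to.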
> 5) Manually patch the undercloud to build overcloud images using the rdo-release rpm only (which utilises the stable Mitaka repo from CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying the file
>
> /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
>
> At around line 467 you will see a reference to epel; I add a new line after that to include the rdo_release DIB element in the build. This typically makes the file look something like
>
> http://paste.openstack.org/show/508196/
>
> (note line 468). Then I create a directory to store my images, build them specifying the mitaka version of rdo_release, and upload the images:
>
> # mkdir ~/images
> # cd ~/images
> # export RDO_RELEASE=mitaka
> # openstack overcloud image build --all
> # openstack overcloud image upload --update-existing

This is an example of something that needs to go into TripleO. I personally never recommend folks in RDO build images themselves, mostly because the tripleoclient wrapper around DIB is very opinionated and difficult to make even simple changes to, like this one. The image building process in RDO is actually not even using tripleoclient, but the replacement library in tripleo-common that allows building images from a declarative YAML:

https://github.com/openstack/tripleo-common/blob/master/scripts/tripleo-build-images
https://github.com/redhat-openstack/ansible-role-tripleo-image-build/blob/master/library/tripleo_build_images.py

What needs to go to TripleO here is a launchpad bug about switching tripleoclient to use this new image building library. That is not something we can do at the RDO level.
Also, note there are stable release images published as well as the DLRN ones:

http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/

> 6) Because of the bug at [2] which affects the ironic agent ramdisk, we need to build a set of images utilising RDO Trunk for the mitaka branch (where the fix is applied), and then upload *only* the new ironic ramdisk. This is done with:
>
> # mkdir ~/images-mitaka-trunk
> # cd ~/images-mitaka-trunk
> # export USE_DELOREAN_TRUNK=1
> # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
> # export DELOREAN_REPO_FILE="delorean.repo"
> # openstack overcloud image build --type agent-ramdisk
> # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk

This is another example of something actionable at the RDO level, and has it all (a Bugzilla with links to launchpad and gerrit). I will take an action to rebase the ironic-python-agent package to pull in that fix.

> 7) Follow the rest of the documentation to deploy the overcloud normally.
>
> Please note that obviously your mileage may vary, and this is by all means not an exhaustive list of the problems. I have however used these steps to do multiple node deployments (10+ nodes) with HA over different hardware sets with different networking setups (single nic, multiple nics with bonding + vlans).
>
> With all the different repos floating around, all of which change very rapidly, combined with the documentation defaults targeting developers and CI systems (not end users), it's hard to not only get a stable TripleO install up, but also communicate and discuss clearly with others what is working, what is broken, and how to compare two installations to see if they are experiencing the same issues.
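One concrete way to tame the repo sprawl described above, when all you need from RDO Trunk is a fix or two like the agent ramdisk in step 6: yum's includepkgs directive can restrict a Delorean repo to named packages, so everything else keeps coming from the stable CBS release. A sketch follows; the repo id and package globs are illustrative only, and the file is written to /tmp here rather than /etc/yum.repos.d/:

```shell
# Sketch: limit an RDO Trunk (Delorean) repo so yum may only pull the
# handful of packages that carry a needed hotfix; all other packages keep
# coming from the stable CBS release. The package globs below are examples,
# not a recommendation - substitute the packages you actually need.
cat > /tmp/delorean-mitaka-hotfix.repo <<'EOF'
[delorean-mitaka-hotfix]
name=RDO Trunk mitaka (hotfix packages only)
baseurl=http://trunk.rdoproject.org/centos7-mitaka/current/
enabled=1
gpgcheck=0
includepkgs=openstack-ironic-inspector* openstack-ironic-python-agent*
EOF
cat /tmp/delorean-mitaka-hotfix.repo
```

With a file like that installed, a plain `yum update` would take the listed packages from trunk while leaving the rest of the system on the stable release, which avoids hand-patching site-packages.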
> To this end I would like to suggest to the RDO and TripleO community that we undertake the following:
>
> 1) Overhaul all the TripleO documentation so that all the steps default to utilising/deploying using RDO Stable (that is, the releases done by CBS). There should be colored boxes with alt steps for those who wish to use RDO Trunk on the stable branch, and RDO Trunk from master. This basically inverts the current pattern. I think anyone, Operator or developer, who is working through the documentation for the first time should be given steps that maximise the chance of success, and thus the most stable release we have. Once a user has gone through the process once, they can look at the alternative steps for more aggressive releases.

First, I am in 100% agreement that the TripleO documentation could use a major overhaul. That said, this is actually a fairly difficult problem. Your proposal is perfect for the RDO case, but it does not seem right for TripleO upstream docs to default to a stable release. In any case, this is something to be solved in TripleO and not in RDO. I did try to solve this by forking the TripleO docs and modifying them to be more RDO centric. However, this was pretty difficult to maintain, and so I abandoned that effort. If there were some group of RDO community members dedicated to doing that, it might be a possible solution. These would need to be net new contributions though, as I personally do not have bandwidth for that.

> 2) Patch python-tripleoclient so that by default, when you run "openstack overcloud image build", it builds the images utilising the rdo_release DIB element, and sets the RDO_RELEASE environment variable to 'mitaka' or whatever the current stable release is (and we should endeavour to update it with new releases).
There should be no extra > environment variables necessary to build images, and by default it > should never touch anything RDO Trunk (delorean) related > I think in the short term, it is really best to use the pre-built images for RDO, using virt-customize where needed to modify them. In the medium term, I think this would be a pretty benign patch to carry on the stable release of python-tripleoclient, but we need to start with a bugzilla for that. Once upstream TripleO switches tripleoclient to use the declarative YAML library in tripleo-common, I think carrying a patch on the stable release that makes it default to building the stable images makes a lot of sense. > 3) For bugs like the two I have mentioned above, we need to have some > sort of robust process for either backporting those patches to the > builds in CBS (I understand we don't do this for various reasons), or we > need some kind of tooling or solution that allows operators to apply > only the fixes they need from RDO Trunk (delorean). We need to ensure > that when an Operator utilises TripleO they have the greatest chance of > success, and bugs such as these which severely impact the deployment > process harm the adoption of TripleO and RDO. > For the ironic bugs I have taken an action to rebase and pick up the changes from the upstream stable branch. In general, this is only done when there is some critical issue, and not on a periodic basis. I was not aware of the two critical issues posted, but thanks to this detailed write-up now I am. As far as tooling to apply only fixes needed from RDO Trunk, that is just yum. Downloading the delorean.repo and modifying it to exclude all but the packages that need hotfixes would get the same result without manual patching. > 4) We should curate and keep an up to date page on rdoproject.org that > does highlight the outstanding issues related to TripleO on the RDO > Stable (CBS) releases. 
These should have links to relevant bugzillas, > clean instructions on how to work around the issue, or cleanly apply a > patch to avoid the issue, and as new releases make it out, we should > update the page to drop off workarounds that are no longer needed. > I like this idea. It suffers from the same problem as the TripleO docs issue though: namely, that it requires net new community members to step up and take ownership of it. > The goal is to push Operators/Users to working with our most stable code > as much as possible, and track/curate issues around that. This way > everyone should be on the same page, issues are easier to discuss and > diagnose, and overall people's experiences should be better. > > I'm interested in thoughts, feedback, and concerns, both from the RDO > and TripleO community, and from the Operator/User community. > > Regards, > > Graeme > > [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 > > On 05/06/16 02:04, Pedro Sousa wrote: >> Thanks Marius, >> >> I can confirm that it installs fine with 3 controllers + 3 computes >> after recreating the stack >> >> Regards >> >> On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea > > wrote: >> >> Hi Pedro, >> >> Scaling out controller nodes is not supported at this moment: >> https://bugzilla.redhat.com/show_bug.cgi?id=1243312 >> >> On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa > > wrote: >> > Hi, >> > >> > some update on scaling the cloud: >> > >> > 1 controller + 1 compute -> 1 controller + 3 computes OK >> > >> > 1 controller + 3 computes -> 3 controllers + 3 computes FAILS >> > >> > Problem: The new controller nodes are "stuck" in "pcsd start", so >> it seems >> > to be a problem joining the pacemaker cluster... Did anyone have this >> > problem?
>> > >> > Regards >> > >> > >> > >> > >> > >> > >> > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa > > wrote: >> >> >> >> Hi, >> >> >> >> I finally managed to install a baremetal in mitaka with 1 >> controller + 1 >> >> compute with network isolation. Thank god :) >> >> >> >> All I did was: >> >> >> >> #yum install centos-release-openstack-mitaka >> >> #sudo yum install python-tripleoclient >> >> >> >> without epel repos. >> >> >> >> Then followed instructions from Redhat Site. >> >> >> >> I downloaded the overcloud images from: >> >> >> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ >> >> >> >> I do have an issue that forces me to delete a json file and run >> >> os-refresh-config inside my overcloud nodes other than that it >> installs >> >> fine. >> >> >> >> Now I'll test with more 2 controllers + 2 computes to have a full HA >> >> deployment. >> >> >> >> If anyone needs help to document this I'll be happy to help. >> >> >> >> Regards, >> >> Pedro Sousa >> >> >> >> >> >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy > > wrote: >> >>> >> >>> The report says: "Fix Released" as of 2016-05-24. >> >>> Are you installing on a clean system with the latest repositories? >> >>> >> >>> Might also want to check your version of rabbitmq: I have >> >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. >> >>> >> >>> ----- Original Message ----- >> >>> > From: "Pedro Sousa" > >> >>> > To: "Ronelle Landy" > >> >>> > Cc: "Christopher Brown" > >, "Ignacio Bravo" >> >>> > >, "rdo-list" >> >>> > > >> >>> > Sent: Friday, June 3, 2016 1:20:43 PM >> >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >> >>> > >> >>> > Anyway to workaround this? Maybe downgrade hiera? 
>> >>> > >> >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy >> > >> >>> > wrote: >> >>> > >> >>> > > I am not sure exactly where you installed from, and when you >> did your >> >>> > > installation, but any chance, you've hit: >> >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? >> >>> > > There is a link bugzilla record. >> >>> > > >> >>> > > ----- Original Message ----- >> >>> > > > From: "Pedro Sousa" > > >> >>> > > > To: "Ronelle Landy" > > >> >>> > > > Cc: "Christopher Brown" > >, "Ignacio Bravo" < >> >>> > > ibravo at ltgfederal.com >, >> "rdo-list" >> >>> > > > > >> >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM >> >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >> >>> > > > >> >>> > > > Thanks Ronelle, >> >>> > > > >> >>> > > > do you think this kind of errors can be related with network >> >>> > > > settings? >> >>> > > > >> >>> > > > "Could not retrieve fact='rabbitmq_nodename', >> >>> > > > resolution='': >> >>> > > > undefined method `[]' for nil:NilClass Could not retrieve >> >>> > > > fact='rabbitmq_nodename', resolution='': undefined >> >>> > > > method `[]' >> >>> > > > for nil:NilClass" >> >>> > > > >> >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy >> > >> >>> > > > wrote: >> >>> > > > >> >>> > > > > Hi Pedro, >> >>> > > > > >> >>> > > > > You could use the docs you referred to. >> >>> > > > > Alternatively, if you want to use a vm for the >> undercloud and >> >>> > > > > baremetal >> >>> > > > > machines for the overcloud, it is possible to use Tripleo >> >>> > > > > Qucikstart >> >>> > > with a >> >>> > > > > few modifications. >> >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028. 
>> >>> > > > > >> >>> > > > > ----- Original Message ----- >> >>> > > > > > From: "Pedro Sousa" > > >> >>> > > > > > To: "Ronelle Landy" > > >> >>> > > > > > Cc: "Christopher Brown" > >, "Ignacio Bravo" < >> >>> > > > > ibravo at ltgfederal.com >, >> "rdo-list" >> >>> > > > > > > >> >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM >> >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >> >>> > > > > > >> >>> > > > > > Hi Ronelle, >> >>> > > > > > >> >>> > > > > > maybe I understand it wrong but I thought that Tripleo >> >>> > > > > > Quickstart >> >>> > > was for >> >>> > > > > > deploying virtual environments? >> >>> > > > > > >> >>> > > > > > And for baremetal we should use >> >>> > > > > > >> >>> > > > > >> >>> > > >> >>> > > >> http://docs.openstack.org/developer/tripleo-docs/installation/installation.html >> >>> > > > > > ? >> >>> > > > > > >> >>> > > > > > Thanks >> >>> > > > > > >> >>> > > > > > >> >>> > > > > > >> >>> > > > > > >> >>> > > > > > >> >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy >> >>> > > > > > > >> >>> > > wrote: >> >>> > > > > > >> >>> > > > > > > Hello, >> >>> > > > > > > >> >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal >> >>> > > > > > > systems - >> >>> > > using >> >>> > > > > > > Tripleo Quickstart with both single-nic-vlans and >> >>> > > > > > > bond-with-vlans >> >>> > > > > network >> >>> > > > > > > isolation configurations. 
>> >>> > > > > > > >> >>> > > > > > > Baremetal can have some complicated networking >> issues but, >> >>> > > > > > > from >> >>> > > > > previous >> >>> > > > > > > experiences, if a single-controller deployment >> worked but a >> >>> > > > > > > HA >> >>> > > > > deployment >> >>> > > > > > > did not, I would check: >> >>> > > > > > > - does the HA deployment command include: -e >> >>> > > > > > > >> >>> > > > > >> >>> > > >> >>> > > >> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> >>> > > > > > > - are there possible MTU issues? >> >>> > > > > > > >> >>> > > > > > > >> >>> > > > > > > ----- Original Message ----- >> >>> > > > > > > > From: "Christopher Brown" > > >> >>> > > > > > > > To: pgsousa at gmail.com , >> ibravo at ltgfederal.com >> >>> > > > > > > > Cc: rdo-list at redhat.com >> >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM >> >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable >> version? >> >>> > > > > > > > >> >>> > > > > > > > Hello Ignacio, >> >>> > > > > > > > >> >>> > > > > > > > Thanks for your response and good to know it isn't >> just me! >> >>> > > > > > > > >> >>> > > > > > > > I would be more than happy to provide developers with >> >>> > > > > > > > access to >> >>> > > our >> >>> > > > > > > > bare metal environments. I'll also file some bugzilla >> >>> > > > > > > > reports to >> >>> > > see >> >>> > > > > if >> >>> > > > > > > > this generates any interest. >> >>> > > > > > > > >> >>> > > > > > > > Please do let me know if you make any progress - I am >> >>> > > > > > > > trying to >> >>> > > > > deploy >> >>> > > > > > > > HA with network isolation, multiple nics and vlans. >> >>> > > > > > > > >> >>> > > > > > > > The RDO web page states: >> >>> > > > > > > > >> >>> > > > > > > > "If you want to create a production-ready cloud, >> you'll >> >>> > > > > > > > want to >> >>> > > use >> >>> > > > > the >> >>> > > > > > > > TripleO quickstart guide." 
>> >>> > > > > > > > >> >>> > > > > > > > which is a contradiction in terms really. >> >>> > > > > > > > >> >>> > > > > > > > Cheers >> >>> > > > > > > > >> >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo >> wrote: >> >>> > > > > > > > > Pedro / Christopher, >> >>> > > > > > > > > >> >>> > > > > > > > > Just wanted to share with you that I also had >> plenty of >> >>> > > > > > > > > issues >> >>> > > > > > > > > deploying on bare metal HA servers, and have >> paused the >> >>> > > deployment >> >>> > > > > > > > > using TripleO until better winds start to flow >> here. I >> >>> > > > > > > > > was >> >>> > > able to >> >>> > > > > > > > > deploy the QuickStart, but on bare metal the >> history was >> >>> > > different. >> >>> > > > > > > > > Couldn't even deploy a two server configuration. >> >>> > > > > > > > > >> >>> > > > > > > > > I was thinking that it would be good to have the >> >>> > > > > > > > > developers >> >>> > > have >> >>> > > > > > > > > access to one of our environments and go through >> a full >> >>> > > > > > > > > install >> >>> > > > > with >> >>> > > > > > > > > us to better see where things fail. We can do this >> >>> > > > > > > > > handholding >> >>> > > > > > > > > deployment once every week/month based on >> developers time >> >>> > > > > > > > > availability. That way we can get a working >> install, and >> >>> > > > > > > > > we can >> >>> > > > > > > > > troubleshoot real life environment problems. >> >>> > > > > > > > > >> >>> > > > > > > > > >> >>> > > > > > > > > IB >> >>> > > > > > > > > >> >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa >> >>> > > > > > > > > > >> >>> > > wrote: >> >>> > > > > > > > > >> >>> > > > > > > > > > Yes. I've used this, but I'll try again as there's >> >>> > > > > > > > > > seems to >> >>> > > be >> >>> > > > > new >> >>> > > > > > > > > > updates. 
>> >>> > > > > > > > > > >> >>> > > > > > > > > > >> >>> > > > > > > > > > >> >>> > > > > > > > > > Stable Branch Skip all repos mentioned above, >> other >> >>> > > > > > > > > > than >> >>> > > epel- >> >>> > > > > > > > > > release which is still required. >> >>> > > > > > > > > > Enable latest RDO Stable Delorean repository >> for all >> >>> > > > > > > > > > packages >> >>> > > > > > > > > > sudo curl -o >> /etc/yum.repos.d/delorean-liberty.repo >> >>> > > > > https://trunk.r >> >>> > > > > > > > > > >> doproject.org/centos7-liberty/current/delorean.repo >> >> >>> > > > > > > > > > Enable the Delorean Deps repository >> >>> > > > > > > > > > sudo curl -o >> >>> > > > > > > > > > /etc/yum.repos.d/delorean-deps-liberty.repo >> >>> > > > > http://tru >> >>> > > > > > > > > > >> nk.rdoproject.org/centos7-liberty/delorean-deps.repo >> >> >>> > > > > > > > > > >> >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher >> Brown < >> >>> > > > > cbrown2 at ocf.co . >> >>> > > > > > > > > > uk> wrote: >> >>> > > > > > > > > > > No, Liberty deployed ok for us. >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > It suggests to me a package mismatch. Have you >> >>> > > > > > > > > > > completely >> >>> > > > > rebuilt >> >>> > > > > > > > > > > the >> >>> > > > > > > > > > > undercloud and then the images using Liberty? 
>> >>> > > > > > > > > > > >> >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro >> Sousa wrote: >> >>> > > > > > > > > > > > AttributeError: 'module' object has no >> attribute >> >>> > > 'PortOpt' >> >>> > > > > > > > > > > -- >> >>> > > > > > > > > > > Regards, >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > Christopher Brown >> >>> > > > > > > > > > > OpenStack Engineer >> >>> > > > > > > > > > > OCF plc >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > Tel: +44 (0)114 257 2200 >> >> >>> > > > > > > > > > > Web: www.ocf.co.uk >> >>> > > > > > > > > > > Blog: blog.ocf.co.uk >> >>> > > > > > > > > > > Twitter: @ocfplc >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > Please note, any emails relating to an OCF >> Support >> >>> > > > > > > > > > > request >> >>> > > must >> >>> > > > > > > > > > > always >> >>> > > > > > > > > > > be sent to support at ocf.co.uk >> for a ticket number to >> >>> > > > > > > > > > > be >> >>> > > > > generated >> >>> > > > > > > > > > > or >> >>> > > > > > > > > > > existing support ticket to be updated. >> Should this >> >>> > > > > > > > > > > not be >> >>> > > done >> >>> > > > > > > > > > > then OCF >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > cannot be held responsible for requests not >> dealt >> >>> > > > > > > > > > > with in a >> >>> > > > > > > > > > > timely >> >>> > > > > > > > > > > manner. >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > OCF plc is a company registered in England >> and Wales. >> >>> > > > > Registered >> >>> > > > > > > > > > > number >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > 4132533, VAT number GB 780 6803 14. >> Registered office >> >>> > > address: >> >>> > > > > > > > > > > OCF plc, >> >>> > > > > > > > > > > >> >>> > > > > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, >> >>> > > > > > > > > > > Chapeltown, >> >>> > > > > > > > > > > Sheffield S35 >> >>> > > > > > > > > > > 2PG. 
>> >>> > > > > > > > > > > >> >>> > > > > > > > > > > If you have received this message in error, >> please >> >>> > > > > > > > > > > notify >> >>> > > us >> >>> > > > > > > > > > > immediately and remove it from your system. >> >>> > > > > > > > > > > >> >>> > > > > > > > > >> >>> > > > > > > > > _______________________________________________ >> >>> > > > > > > > > rdo-list mailing list >> >>> > > > > > > > > rdo-list at redhat.com >> >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list >> >>> > > > > > > > > >> >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >>> > > > > > > > -- >> >>> > > > > > > > Regards, >> >>> > > > > > > > >> >>> > > > > > > > Christopher Brown >> >>> > > > > > > > OpenStack Engineer >> >>> > > > > > > > OCF plc >> >>> > > > > > > > >> >>> > > > > > > > Tel: +44 (0)114 257 2200 >> >> >>> > > > > > > > Web: www.ocf.co.uk >> >>> > > > > > > > Blog: blog.ocf.co.uk >> >>> > > > > > > > Twitter: @ocfplc >> >>> > > > > > > > >> >>> > > > > > > > Please note, any emails relating to an OCF Support >> request >> >>> > > > > > > > must >> >>> > > > > always >> >>> > > > > > > > be sent to support at ocf.co.uk >> for a ticket number to be >> >>> > > generated or >> >>> > > > > > > > existing support ticket to be updated. Should this >> not be >> >>> > > > > > > > done >> >>> > > then >> >>> > > > > OCF >> >>> > > > > > > > >> >>> > > > > > > > cannot be held responsible for requests not dealt >> with in a >> >>> > > timely >> >>> > > > > > > > manner. >> >>> > > > > > > > >> >>> > > > > > > > OCF plc is a company registered in England and Wales. >> >>> > > > > > > > Registered >> >>> > > > > number >> >>> > > > > > > > >> >>> > > > > > > > 4132533, VAT number GB 780 6803 14. 
Registered office >> >>> > > > > > > > address: >> >>> > > OCF >> >>> > > > > plc, >> >>> > > > > > > > >> >>> > > > > > > > 5 Rotunda Business Centre, Thorncliffe Park, >> Chapeltown, >> >>> > > Sheffield >> >>> > > > > S35 >> >>> > > > > > > > 2PG. >> >>> > > > > > > > >> >>> > > > > > > > If you have received this message in error, please >> notify >> >>> > > > > > > > us >> >>> > > > > > > > immediately and remove it from your system. >> >>> > > > > > > > >> >>> > > > > > > > _______________________________________________ >> >>> > > > > > > > rdo-list mailing list >> >>> > > > > > > > rdo-list at redhat.com >> >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list >> >>> > > > > > > > >> >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >>> > > > > > > > >> >>> > > > > > > >> >>> > > > > > >> >>> > > > > >> >>> > > > >> >>> > > >> >>> > >> >> >> >> >> > >> > >> > _______________________________________________ >> > rdo-list mailing list >> > rdo-list at redhat.com >> > https://www.redhat.com/mailman/listinfo/rdo-list >> > >> > To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > From apevec at redhat.com Mon Jun 6 14:25:59 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 6 Jun 2016 16:25:59 +0200 Subject: [rdo-list] Baremetal Tripleo stable version? 
In-Reply-To: <57557BE8.3080305@redhat.com> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> <57557BE8.3080305@redhat.com> Message-ID: >> 4) We should curate and keep an up to date page on rdoproject.org that >> does highlight the outstanding issues related to TripleO on the RDO >> Stable (CBS) releases. These should have links to relevant bugzillas, >> clean instructions on how to work around the issue, or cleanly apply a >> patch to avoid the issue, and as new releases make it out, we should >> update the page to drop off workarounds that are no longer needed. There's placeholder page exactly for that: https://www.rdoproject.org/testday/workarounds/ (shortcut https://www.rdoproject.org/workarounds ) Cheers, Alan From rbowen at redhat.com Mon Jun 6 14:37:13 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 10:37:13 -0400 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> <57557BE8.3080305@redhat.com> Message-ID: <6e8295ec-7111-be30-e969-3075a22e9e21@redhat.com> On 06/06/2016 10:25 AM, Alan Pevec wrote: >>> 4) We should curate and keep an up to date page on rdoproject.org that >>> does highlight the outstanding issues related to TripleO on the RDO >>> Stable (CBS) releases. 
These should have links to relevant bugzillas, >>> clean instructions on how to work around the issue, or cleanly apply a >>> patch to avoid the issue, and as new releases make it out, we should >>> update the page to drop off workarounds that are no longer needed. > There's placeholder page exactly for that: > https://www.rdoproject.org/testday/workarounds/ > (shortcut https://www.rdoproject.org/workarounds ) > This page tends to get flushed each test day, and so we haven't done a great job of keeping it up to date *between* test days. Right now, there's nothing there. It would indeed be really helpful to have this page updated on a regular basis, with no-longer-relevant workarounds removed and new ones added. --Rich From rbowen at redhat.com Mon Jun 6 14:48:46 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 10:48:46 -0400 Subject: [rdo-list] Reminder: Newton 1 test day, Thursday and Friday Message-ID: <88f78c9b-fd9d-bc87-986c-f7cb63d5ee35@redhat.com> Reminder: with Newton Milestone 1 out, we're planning to hold an RDO test day later this week, June 9-10. Details are at https://www.rdoproject.org/testday/newton/milestone1/ As always we greatly appreciate your help in any way that you can: Getting the word out, testing, documenting test case instructions, helping others who show up for test day, and writing up successes and failures. Thanks for helping us make Newton the best RDO yet. 
--Rich From hguemar at fedoraproject.org Mon Jun 6 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 6 Jun 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160606150003.3C61060A4009@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-06-08 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Jun 6 18:13:13 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 14:13:13 -0400 Subject: [rdo-list] [Rdo-newsletter] June 2016 RDO Community Newsletter Message-ID: <748f1b47-d22b-0813-9a45-9d54fa8839be@redhat.com> June 2016, RDO Community Newsletter See the newsletter in your browser at https://www.rdoproject.org/newsletter/2016-june/ Quick links: * Quick Start * Mailing Lists * RDO release packages * review.RDOProject.org * RDO blog * Q&A * Open Tickets * Twitter * Newton release schedule Thanks for being part of the RDO community! The Newton cycle is moving along quickly, and Milestone 1 is already out. June is going to be another busy month, with lots of exciting events happening around the world. Upcoming Events Some of the upcoming events in RDO, and the larger OpenStack ecosystem, are: *Newton 1 Test Day* With Newton 1 out, it's time to test the first RDO Newton packages. We're planning a test day on *June 9th and 10th*. Details may be found on the test day website . Drop by the #rdo channel on the Freenode IRC network for help and discussion. 
As always we greatly appreciate your help in any way that you can: Getting the word out, testing, documenting test case instructions, helping others who show up for test day, and writing up successes and failures. *Newton Doc Day* We need your help with the RDO website. As each new upstream release happens, we need to update the RDO website to reflect the new reality. For example, there are still pages on the site that refer to Liberty as the current stable release, and Mitaka as the upcoming one. To address this, we're planning a doc day on *June 16th and 17th*. As with the test day, we'll be hanging out on #rdo on Freenode for questions and discussion, and we'll be identifying documents that need to be updated, removed, or added, in the website issue tracker . You can help by identifying outdated pages or new pages that need to be written, or by updating and writing those pages. *OpenStack Days* There are numerous upcoming OpenStack Days where RDO engineers will be attending, or speaking, in the coming days. As I write this, OpenStack Days Budapest is happening - it'll be over by the time you read this. Later this week (June 8), OpenStack Days Prague will be happening at the DOX Centre for Contemporary Art. Other upcoming OpenStack Day events include Dublin (June 10), Mexico City (June 14), Tokyo (July 6 and 7), and Beijing (July 14 and 15). If you're going to attend any of these events, please consider writing up your experience and sending the report to rdo-list for all of us. *Red Hat Summit* We're just three weeks away from Red Hat Summit and I want to highlight two reasons that you should be there. K Rain Leander will be giving a presentation titled Become an OpenStack TripleO ATC, easy as ABC in which she'll teach you how to set up a developer environment necessary to work on improving TripleO. Along the way, you'll learn a lot about TripleO, and the larger OpenStack developer ecosystem. 
Scott Suehle will be giving a talk titled Use Linux on your whole rack with RDO and open networking, in which he'll demo setting up an RDO deployment, and best networking practices for taming the complexity of OpenStack networking. Of course, there are lots of other reasons to come to Red Hat Summit. While you're there, stop by the RDO booth in Community Central for your RDO t-shirt, and your copy of TripleO on a USB drive. *And more* Other RDO events, including the many OpenStack meetups around the world, are always listed on the RDO Events page. If you have an RDO-related event, please feel free to add it by submitting a pull request to the RDO community events calendar. Blog Posts As always, there have been some great RDO blog posts in the last month, but I want to point out two in particular. Last week, Alfredo Moralejo wrote two excellent blog posts explaining how the RDO project works, what it produces, who does what in that process, and how you can help. In the first of these two posts, Newbie in RDO: one size doesn't fit all, he gives the newbie's perspective on the project, explaining how it fits into the upstream and downstream communities. In the second post, Newbie in RDO (2): RDO Trunk from a bird's eye view, he shows in more detail how all the parts of the RDO process fit together, and where you need to go to work with each component. Packaging meetings Every Wednesday at 15:00 UTC, we have the weekly RDO community meeting on the #RDO channel on Freenode IRC. This is where the business of running the project is discussed and decided. If there's something you'd like to see happen, you need to be at this meeting so that your voice can be heard. The agenda for this meeting is always at https://etherpad.openstack.org/p/RDO-Meeting and you are encouraged to add the items that you care about to that document. Notes from past meetings are usually posted on rdo-list and on the website.
At 15:00 UTC every Thursday, we have the CentOS Cloud SIG Meeting on #centos-devel. The agenda for that meeting is at https://etherpad.openstack.org/p/centos-cloud-sig, and the minutes from past meetings are posted at https://www.rdoproject.org/community/cloud-sig-meeting/. The purpose of the CentOS Cloud SIG (Special Interest Group) is to coordinate the various cloud infrastructure projects that are packaged for CentOS, and whatever common resources they all need. Bug Statistics and Bug Triage Although we try to do a bug triage on the third Tuesday of every month, the triage in May was something special, as we tried to clean up all of the End Of Life (EOL) bugs - that is, bugs that refer to versions of OpenStack that have been declared EOL upstream, and thus are unlikely to ever be fixed. Chandan Kumar has written a report from that effort on the RDO blog. Highlights include the closing of almost 400 bugs by an automated process. Thanks to Chandan for putting together that event, and to everyone that participated. Keep in touch There are lots of ways to stay in touch with what's going on in the RDO community. The best ways are: WWW * RDO * OpenStack Q&A Mailing Lists: * rdo-list mailing list * This newsletter IRC * IRC - #rdo on Freenode.irc.net * Puppet module development - #rdo-puppet Social Media * Follow us on Twitter * Google+ * Facebook Thanks again for being part of the RDO community! -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- _______________________________________________ Rdo-newsletter mailing list Rdo-newsletter at redhat.com https://www.redhat.com/mailman/listinfo/rdo-newsletter From rbowen at redhat.com Mon Jun 6 18:15:39 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 14:15:39 -0400 Subject: [rdo-list] Fwd: [openstack-community] Call for Presentations NOW OPEN- OpenStack Summit Barcelona October 2016 In-Reply-To: <57553859.5050605@openstack.org> References: <57553859.5050605@openstack.org> Message-ID: Just in case you haven't seen it yet ... -------- Forwarded Message -------- Subject: [openstack-community] Call for Presentations NOW OPEN- OpenStack Summit Barcelona October 2016 Date: Mon, 06 Jun 2016 03:46:17 -0500 From: Jimmy McArthur To: community at lists.openstack.org Hi Everyone- The Call for Presentations is NOW OPEN for the upcoming OpenStack Summit in Barcelona, October 25-28! Hurry - the submission deadline is July 13th at 11:59pm PST (July 14th at 6:59 UTC). Details on the selection process and track chair information can be found here . June 17th is the deadline for track chair nominations. New: Proposed sessions must indicate a format: Panel or Presentation. Each format has a maximum number of speakers associated. Panels are allowed a total of four speakers plus one moderator, whereas Presentations are limited to two speakers. As a reminder, speakers are limited to a maximum of three presentation submissions total. Contact speakersupport at openstack.org with any submission questions. REGISTRATION Registration is now open . Purchase your discounted early bird passes now. Prices will increase in early September. SPONSORSHIP SALES Sponsorship sales will open Wednesday, June 15, at 8:00am PST (15:00 UTC). At that time the executable electronic contract will be made available for signature HERE . 
All sponsorships will be sold on a first-come basis determined by the time stamp on completed electronic agreements. The top level sponsorships and limited quantity add-ons always sell out quickly so please be mindful of that. Full details of the sponsorship signing process are outlined here and on page 4 of the Prospectus. Once you execute the Barcelona Summit contract you will receive a confirmation email from Echosign; you must click the link in that email to complete the signing process. Don't forget that last step! If you have any overdue balances owed to the Foundation then these must be paid in full before you sign the Summit contract. AUSTIN SESSION FEEDBACK During the OpenStack Summit in Austin we did not receive as many session ratings as we'd have liked via the mobile app. Here's another chance to provide feedback to speakers. You'll find that the session list is based on your personal schedule. If you didn't attend one of the sessions listed, just skip it. We will also be improving the in-app feedback mechanism for future Summits. Please note that all feedback will be publicly viewable including your name. Click here to rate the Summit sessions that you attended in Austin. If you have any general Summit questions, contact us at summit at openstack.org. Erin Disney OpenStack Marketing erin at openstack.org -------------- next part -------------- _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community From bkero at redhat.com Mon Jun 6 18:54:33 2016 From: bkero at redhat.com (Ben Kero) Date: Mon, 6 Jun 2016 11:54:33 -0700 Subject: [rdo-list] OpenStack Puppet Module format change (Newton-forward) Message-ID: Hello all, This should not come as a great surprise for some, but I realized recently that I should probably reach a larger audience. We're changing the way that we handle puppet module packages in Newton (and future releases).
The previous way of handling Puppet modules for RDO was to bundle them all together into a single "OpenStack-Puppet-Modules" package. This involved a lot of thankless manual work to merge all the modules into a single repository [1]. This will still be the case for Kilo, Liberty, and Mitaka releases. However, for upcoming releases such as Newton, we will be splitting out the OpenStack Puppet Modules into individual packages (example: puppet-nova [2]) and accompanying -distgit repository [3]. There will still be a 'openstack-puppet-modules' metapackage [4] that will install all the dependent modules for folks that need it. It is entirely possible to select only the modules that you need for your particular installation though. This message should also serve as notice to RDO contributors still doing ports on the mono-repo. The changes made to the master branch will no longer be picked up and used. Please let me know if you have any questions or concerns about the shift to this new format. I would be happy to help clarify or respond to your concerns. I can be reached as 'bkero' on IRC. -- Ben Kero RedHat, Engineer, OPM-CI [1] https://github.com/redhat-openstack/openstack-puppet-modules [2] https://review.rdoproject.org/r/gitweb?p=puppet/puppet-nova.git [3] https://review.rdoproject.org/r/gitweb?p=puppet/puppet-nova-distgit.git [4] https://review.rdoproject.org/r/gitweb?p=openstack/openstack-puppet-modules-distgit.git;a=blob;f=openstack-puppet-modules.spec;h=1e650414a94617a6c57384d3d17fb603b88249ba;hb=HEAD -------------- next part -------------- An HTML attachment was scrubbed... 
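For readers unfamiliar with the pattern: a metapackage like [4] typically ships no files of its own and exists only for its Requires lines. A hypothetical fragment of what such a spec might contain, assuming the split keeps upstream module names (only puppet-nova is confirmed by this post; the other module names are illustrative):

```spec
# Hypothetical sketch -- not the actual openstack-puppet-modules.spec.
Name:           openstack-puppet-modules
Summary:        Metapackage for the split OpenStack Puppet module packages
Requires:       puppet-nova
Requires:       puppet-neutron
Requires:       puppet-keystone

%description
Installs the individual puppet-* packages that previously shipped
together in the single OpenStack-Puppet-Modules bundle.
```

Installing the metapackage then pulls in every module, while users who only need a subset can install the individual puppet-* packages directly.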
URL: From rbowen at redhat.com Mon Jun 6 19:11:07 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 15:11:07 -0400 Subject: [rdo-list] Unanswered "RDO" questions on ask.openstack.org Message-ID: 58 unanswered questions: Need To Download Older Kilo https://ask.openstack.org/en/question/92658/need-to-download-older-kilo/ Tags: kilo-openstack AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws ceilometer: I've installed openstack mitaka. but swift stops working when i configured the pipeline and ceilometer filter https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ Tags: ceilometer, openstack-swift, mitaka Fail on installing the controller on Cent OS 7 https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ Tags: installation, centos7, controller the error of service entity and API endpoints https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ Tags: service, entity, and, api, endpoints Running delorean fails: Git won't fetch sources https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ Tags: delorean, rdo RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-cant-resolve-trunk-mgtrdoprojectorg/ Tags: rdo-manager Keystone authentication: Failed to contact the endpoint. https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano adding computer node. 
https://ask.openstack.org/en/question/91417/adding-computer-node/ Tags: rdo, openstack Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: stack, resource, topology, dashboard Build of instance aborted: Block Device Mapping is Invalid. https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ Tags: cinder, lvm, centos7 No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack how to use chef auto manage openstack in RDO? https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ Tags: chef, rdo Cinder error issues on Liberty https://ask.openstack.org/en/question/90606/cinder-error-issues-on-liberty/ Tags: cinder-volume, liberty Separate Cinder storage traffic from management https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ Tags: cinder, separate, nic, iscsi Openstack installation fails using packstack, failure is in installation of openstack-nova-compute. Error: Dependency Package[nova-compute] has failures https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ Tags: novacompute, rdo, packstack, dependency, failure CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? 
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ Tags: keyboard, map, keymap, vncproxy, novnc OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Can't create volume with cinder https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ Tags: cinder, glusterfs, nfs Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse Creating Sahara cluster: Error attach volume to instance https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, hadoop, icehouse, vanilla Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing RDO kilo installation metadata widget doesn't work https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ Tags: kilo, flavor, metadata Not able to ssh into RDO Kilo instance https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ Tags: rdo, instance-ssh redhat RDO enable access to swift via S3 
https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ Tags: swift, s3 openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo From rbowen at redhat.com Mon Jun 6 19:25:37 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 15:25:37 -0400 Subject: [rdo-list] Red Hat Summit: Demo volunteers wanted Message-ID: <9b62e5dd-0b7d-d46f-248a-0ac9a303388c@redhat.com> At OpenStack Summit, we had a number of people volunteer to present demos at the RDO booth and/or answer attendee questions. This was a big success, with almost every time slot being filled by very helpful people. We'd like to do the same thing at Red Hat Summit, which will be held in 3 weeks in San Francisco. If you plan to attend, and if you have a free time slot, I would appreciate it if you'd be willing to do a shift in the booth, and possibly bring a demo along with you. Demos *can* be a "live demo", but typically, unless it's completely self-contained on your laptop, you're better off doing a video, since network conditions can't be guaranteed. (We usually have a hard-wire network in the booth, but even that can be flakey at peak times.) If you're willing to participate, please claim a slot in the schedule etherpad, HERE: https://etherpad.openstack.org/p/rhsummit-rdo-booth Time slots are mostly 60 minutes. If some other time slot works better for you, please do feel free to modify the start/end times. Please indicate what you'll be demoing. Thanks! --Rich From apevec at redhat.com Mon Jun 6 19:41:56 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 6 Jun 2016 21:41:56 +0200 Subject: [rdo-list] OpenStack Puppet Module format change (Newton-forward) In-Reply-To: References: Message-ID: > The previous way of handling Puppet modules for RDO was to bundle them all > together into a single "OpenStack-Puppet-Modules" package. 
This involved a > lot of thankless manual work to merge all the modules into a single > repository [1]. This will still be the case for Kilo, Liberty, and Mitaka > releases. ... > [1] https://github.com/redhat-openstack/openstack-puppet-modules It is important to note that, starting with Liberty, OPM ships _without_ downstream patches, thanks to the work done by EmilienM and jayg! Updates on the stable/mitaka [1] and stable/liberty [2] branches are pure upstream merges. Cheers, Alan [1] https://github.com/redhat-openstack/openstack-puppet-modules/commits/stable/mitaka [2] https://github.com/redhat-openstack/openstack-puppet-modules/commits/stable/liberty From rbowen at redhat.com Mon Jun 6 19:54:09 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 6 Jun 2016 15:54:09 -0400 Subject: [rdo-list] Upcoming RDO/OpenStack Meetups Message-ID: <150e904a-8ba8-5cab-44d3-615bfa7eb0fb@redhat.com> The following are the meetups I'm aware of in the coming week where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered.
--Rich * Monday June 06 in Paris, FR: Discutons OpenStack et containers - http://www.meetup.com/Meetup-SUSE-Linux-Paris/events/231095109/ * Tuesday June 07 in Sydney, AU: June Sydney Meetup - SDN 101 and Gnocchi - http://www.meetup.com/Australian-OpenStack-User-Group/events/229602105/ * Tuesday June 07 in San Jose, CA, US: Come and talk about Openstack Project Romana and Datera Storage - http://www.meetup.com/Silicon-Valley-OpenStack-Ops-Meetup/events/231210364/ * Tuesday June 07 in Fort Collins, CO, US: Heat usage - http://www.meetup.com/OpenStack-Colorado/events/231434361/ * Wednesday June 08 in Prague, CZ: OpenStack Day Prague - http://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/228029462/ * Wednesday June 08 in Houston, TX, US: OpenStack & Cisco UCS - http://www.meetup.com/Houston-Cisco-UCS-Meetup/events/230853127/ * Thursday June 09 in San Antonio, TX, US: Passing the Certified OpenStack Administrator Test Part 1: OpenStack Overview - http://www.meetup.com/SA-Open-Stackers/events/231626701/ * Thursday June 09 in San Francisco, CA, US: SF Bay OpenStack Meetup: Data-Driven, Cost-Based OpenStack Capacity Management - http://www.meetup.com/openstack/events/231297777/ * Thursday June 09 in San Diego, CA, US: OpenStack LAMP & Load Balancing as a Service - http://www.meetup.com/San-Diego-Cloud-Computing-Meetup/events/231422642/ * Thursday June 09 in Montevideo, UY: El 13 no es mala suerte - http://www.meetup.com/OpenStack-Uruguay/events/231426806/ * Friday June 10 in Dublin, IE: OpenStack Ireland Day - June 10th 2016 - http://www.meetup.com/OpenStack-Ireland/events/229221735/ * Friday June 10 in Houston, TX, US: Arista, Neutron, and Docker Oh My! Did I mention Docker!? 
- http://www.meetup.com/openstackhoustonmeetup/events/231594293/ From cbrown2 at ocf.co.uk Mon Jun 6 21:08:58 2016 From: cbrown2 at ocf.co.uk (Christopher Brown) Date: Mon, 6 Jun 2016 22:08:58 +0100 Subject: [rdo-list] Documenting RDO Stable and workarounds Message-ID: <1465247338.16318.11.camel@ocf.co.uk> Hello, I'm happy to get involved in this - my intention is to move the documented workarounds we have internally onto the RDO docs website. I started doing this earlier in the year: https://www.rdoproject.org/tripleo/troubleshooting/ However, I need to clarify where TripleO QuickStart fits into the picture. I haven't used this, but it doesn't appear to be a tool that produces a production-capable deployment - by this I mean a minimum of three controllers in HA configuration on separate hardware. So it appears we have four offerings: 1. Trystack - Research 2. Packstack - PoC 3. TripleO quickstart - Dev 4. TripleO baremetal - Prod Have I pigeon-holed these correctly? -- Regards, Christopher Brown OpenStack Engineer OCF plc Tel: +44 (0)114 257 2200 Web: www.ocf.co.uk Blog: blog.ocf.co.uk Twitter: @ocfplc OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG. From ak at cloudssky.com Mon Jun 6 21:33:42 2016 From: ak at cloudssky.com (Arash Kaffamanesh) Date: Mon, 6 Jun 2016 23:33:42 +0200 Subject: [rdo-list] Documenting RDO Stable and workarounds In-Reply-To: <1465247338.16318.11.camel@ocf.co.uk> References: <1465247338.16318.11.camel@ocf.co.uk> Message-ID: Hi, Kolla also uses RDO packages, and it works, somehow :-) Regards, Arash On Mon, Jun 6, 2016 at 11:08 PM, Christopher Brown wrote: > Hello, > > I'm happy to get involved in this - my intention is to move the > documented workarounds we have internally onto the RDO docs website.
> > I started doing this earlier in the year: > > https://www.rdoproject.org/tripleo/troubleshooting/ > > However I need to clarify where TripleO QuickStart fits into the > picture. I haven't used this but it doesn't appear to be a tool that > produces a production-capable deployment - by this I mean a minimum of > three controllers in HA configuration on separate hardware. > > So it appears we have four offerings: > > 1. Trystack - Research > 2. Packstack - PoC > 3. TripleO quickstart - Dev > 4. TripleO baremetal - Prod > > Have I pigeon-holed these correctly? > > -- > Regards, > > Christopher Brown > OpenStack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > OCF plc is a company registered in England and Wales. Registered number > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggillies at redhat.com Mon Jun 6 23:25:00 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 7 Jun 2016 09:25:00 +1000 Subject: [rdo-list] Baremetal Tripleo stable version? 
In-Reply-To: <1465210434.9673.50.camel@ocf.co.uk> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> <1465210434.9673.50.camel@ocf.co.uk> Message-ID: On 06/06/16 20:53, Christopher Brown wrote: > Hi Graeme, > > Thanks for your email, which is greatly appreciated. > > I am currently rebuilding using your instructions and will update with > my findings. Once this is done I'll look at starting a basic baremetal > install guide for the RDO website, as one doesn't exist at the moment > that I can see, and I think one of the main "takeaways" from this is > that stable documentation is needed urgently. I'd be very much inclined > to keep it separate from the rather confusing developer documentation > in use currently. This is why people seem to be heading off to Red Hat > docs, I guess. > > But I'd be really grateful if the bugs under discussion are addressed > in Mitaka stable as soon as possible, as curling patches is less great. > > As an addition, following discussion with Pedro, it looks like the > overcloud deployment doesn't handle spanning tree on switches correctly: > we need to manually delete JSON files and re-run os-apply-config when > the deployment stalls. Spanning tree ships enabled by default on switches > these days, so it would be good if the deployment could cater for links > that aren't immediately in forwarding state. > > Happy to help out with documentation and keeping errata/workarounds up > to date - I think we just need a "stable" section of the website, which > doesn't seem to exist at the moment.
> > Regards I am still of the opinion that the documentation related to the stable usage workflow should live upstream in the tripleo.org docs, rather than in a separate document that is maintained out of tree and perhaps won't get as much input from the TripleO developers. The workarounds for particular versions of TripleO and RDO should be stored in the RDO wiki, however. > > > On Mon, 2016-06-06 at 00:37 +0100, Graeme Gillies wrote: >> Hi Everyone, >> >> I just wanted to say I have been following this thread quite closely >> and >> can sympathize with some of the pain people are going through to get >> TripleO to work. >> >> Currently it's quite difficult and a bit opaque how to actually >> utilise the stable Mitaka repos in order to build a functional >> undercloud and overcloud environment. >> >> First I wanted to share the steps I have undergone in order to get a >> functional overcloud working with RDO Mitaka utilising the RDO stable >> release built by CentOS, and then I'll talk about some specific steps >> I >> think need to be undertaken by the RDO/TripleO team in order to >> provide >> a better experience in the future. >> >> To get a functional overcloud using RDO Mitaka, you need to do the >> following: >> >> 1) Install EPEL on your undercloud >> 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your >> undercloud >> 3) Follow the normal steps to install your undercloud (modifying >> undercloud.conf, and running openstack undercloud install) >> 4) You will now need to manually patch ironic on the undercloud in >> order >> to make sure repeated introspection works. This might not be needed >> if >> you don't do any introspection, but I find more often than not you >> end >> up having to do it, so it's worthwhile.
The bug you need to patch is >> [1], >> and I typically run the following commands to apply the patch: >> >> # sudo su - >> $ cd /usr/lib/python2.7/site-packages >> $ curl 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download' | base64 -d | patch -p1 >> $ systemctl restart openstack-ironic-inspector >> $ systemctl restart openstack-ironic-inspector-dnsmasq >> $ exit >> # >> >> 5) Manually patch the undercloud to build overcloud images using the >> rhos-release rpm only (which utilises the stable Mitaka repo from >> CentOS, and nothing from RDO Trunk [delorean]). I do this by >> modifying >> the file >> >> /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py >> >> At around line 467 you will see a reference to epel; I add a new line >> after that to include the rdo_release DIB element in the build as >> well. >> This typically makes the file look something like >> >> http://paste.openstack.org/show/508196/ >> >> (note line 468). Then I create a directory to store my images and >> build >> them, specifying the mitaka version of rdo_release. I then upload >> these >> images: >> >> # mkdir ~/images >> # cd ~/images >> # export RDO_RELEASE=mitaka >> # openstack overcloud image build --all >> # openstack overcloud image upload --update-existing >> >> 6) Because of the bug at [2], which affects the ironic agent ramdisk, >> we >> need to build a set of images utilising RDO Trunk for the mitaka >> branch >> (where the fix is applied), and then upload *only* the new ironic >> ramdisk.
This is done with: >> >> # mkdir ~/images-mitaka-trunk >> # cd ~/images-mitaka-trunk >> # export USE_DELOREAN_TRUNK=1 >> # export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/" >> # export DELOREAN_REPO_FILE="delorean.repo" >> # openstack overcloud image build --type agent-ramdisk >> # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk >> >> 7) Follow the rest of the documentation to deploy the overcloud >> normally >> >> Please note that obviously your mileage may vary, and this is by no >> means an exhaustive list of the problems. I have, however, used >> these >> steps to do multiple node deployments (10+ nodes) with HA over >> different >> hardware sets with different networking setups (single nic, multiple >> nic >> with bonding + vlans). >> >> With all the different repos floating around, all of which change very >> rapidly, combined with the documentation defaults targeting >> developers >> and CI systems (not end users), it's hard not only to get a stable >> TripleO install up, but also to communicate and discuss clearly with >> others >> what is working, what is broken, and how to compare two installations >> to >> see if they are experiencing the same issues. >> >> To this end I would like to suggest to the RDO and TripleO community >> that we undertake the following: >> >> 1) Overhaul all the TripleO documentation so that all the steps >> default >> to utilising/deploying RDO Stable (that is, the releases done >> by >> CBS). There should be colored boxes with alternative steps for those who wish >> to >> use RDO Trunk on the stable branch, and RDO Trunk from master. This >> basically inverts the current pattern. I think anyone, Operator or >> developer, who is working through the documentation for the first >> time >> should be given steps that maximise the chance of success, and thus >> the >> most stable release we have.
Once a user has gone through the process >> once, they can look at the alternative steps for more aggressive >> releases. >> >> 2) Patch python-tripleoclient so that by default, when you run >> "openstack overcloud image build", it builds the images utilising the >> rdo_release DIB element and sets the RDO_RELEASE environment >> variable >> to 'mitaka' or whatever the current stable release is (and we >> should >> endeavour to update it with new releases). There should be no extra >> environment variables necessary to build images, and by default it >> should never touch anything RDO Trunk (delorean) related. >> >> 3) For bugs like the two I have mentioned above, we need to have some >> sort of robust process for either backporting those patches to the >> builds in CBS (I understand we don't do this for various reasons), or >> we >> need some kind of tooling or solution that allows operators to apply >> only the fixes they need from RDO Trunk (delorean). We need to ensure >> that when an Operator utilises TripleO they have the greatest chance >> of >> success; bugs such as these, which severely impact the deployment >> process, harm the adoption of TripleO and RDO. >> >> 4) We should curate and keep an up-to-date page on rdoproject.org >> that >> highlights the outstanding issues related to TripleO on the RDO >> Stable (CBS) releases. These should have links to relevant bugzillas, >> clean instructions on how to work around the issue or cleanly apply >> a >> patch to avoid the issue, and as new releases make it out, we should >> update the page to drop off workarounds that are no longer needed. >> >> The goal is to push Operators/Users towards working with our most stable >> code >> as much as possible, and to track/curate issues around that. This way >> everyone should be on the same page, issues are easier to discuss and >> diagnose, and overall people's experiences should be better.
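A side note on the "base64 -d | patch -p1" step in the walkthrough above: Gerrit's /patch?download endpoint returns the diff base64-encoded, which is why it must be decoded before patch(1) can apply it. Below is a self-contained sketch of the same pattern, with a locally generated diff standing in for the Gerrit download (the file name and diff content are purely illustrative):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
printf 'hello\n' > greeting.txt

# Stand-in for the Gerrit download: a unified diff, base64-encoded the
# way review.openstack.org serves patches from /patch?download.
diff_b64=$(printf -- '--- a/greeting.txt\n+++ b/greeting.txt\n@@ -1 +1 @@\n-hello\n+goodbye\n' | base64)

# The same decode-and-apply pipeline as in the walkthrough; -p1 strips
# the a/ and b/ prefixes that Gerrit puts on the paths in its diffs.
printf '%s\n' "$diff_b64" | base64 -d | patch -p1

cat greeting.txt   # prints "goodbye"
```

This is why piping the curl output straight into patch without the base64 -d step fails: patch never sees a valid diff header.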
>> >> I'm interested in thoughts, feedback, and concerns, both from the RDO >> and TripleO community, and from the Operator/User community. >> >> Regards, >> >> Graeme >> >> [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 >> >> On 05/06/16 02:04, Pedro Sousa wrote: >>> >>> Thanks Marius, >>> >>> I can confirm that it installs fine with 3 controllers + 3 computes >>> after recreating the stack >>> >>> Regards >>> >>> On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea >> t >>> > wrote: >>> >>> Hi Pedro, >>> >>> Scaling out controller nodes is not supported at this moment: >>> https://bugzilla.redhat.com/show_bug.cgi?id=1243312 >>> >>> On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa >> > wrote: >>> > Hi, >>> > >>> > some update on scaling the cloud: >>> > >>> > 1 controller + 1 compute -> 1 controller + 3 computes OK >>> > >>> > 1 controller + 3 computes -> 3 controllers + 3 compute FAILS >>> > >>> > Problem: The new controller nodes are "stuck" in "pscd >>> start", so >>> it seems >>> > to be a problem joining the pacemaker cluster... Did anyone >>> had this >>> > problem? >>> > >>> > Regards >>> > >>> > >>> > >>> > >>> > >>> > >>> > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa >> m >>> > wrote: >>> >> >>> >> Hi, >>> >> >>> >> I finally managed to install a baremetal in mitaka with 1 >>> controller + 1 >>> >> compute with network isolation. Thank god :) >>> >> >>> >> All I did was: >>> >> >>> >> #yum install centos-release-openstack-mitaka >>> >> #sudo yum install python-tripleoclient >>> >> >>> >> without epel repos. >>> >> >>> >> Then followed instructions from Redhat Site. >>> >> >>> >> I downloaded the overcloud images from: >>> >> >>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_image >>> s/mitaka/delorean/ >>> >> >>> >> I do have an issue that forces me to delete a json file and >>> run >>> >> os-refresh-config inside my overcloud nodes other than that >>> it >>> installs >>> >> fine. 
>>> >> >>> >> Now I'll test with more 2 controllers + 2 computes to have a >>> full HA >>> >> deployment. >>> >> >>> >> If anyone needs help to document this I'll be happy to help. >>> >> >>> >> Regards, >>> >> Pedro Sousa >>> >> >>> >> >>> >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy >> .com >>> > wrote: >>> >>> >>> >>> The report says: "Fix Released" as of 2016-05-24. >>> >>> Are you installing on a clean system with the latest >>> repositories? >>> >>> >>> >>> Might also want to check your version of rabbitmq: I have >>> >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. >>> >>> >>> >>> ----- Original Message ----- >>> >>> > From: "Pedro Sousa" >> ail.com>> >>> >>> > To: "Ronelle Landy" >> hat.com>> >>> >>> > Cc: "Christopher Brown" >> >, "Ignacio Bravo" >>> >>> > >, >>> "rdo-list" >>> >>> > > >>> >>> > Sent: Friday, June 3, 2016 1:20:43 PM >>> >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> >>> > >>> >>> > Anyway to workaround this? Maybe downgrade hiera? >>> >>> > >>> >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy >>> > >>> >>> > wrote: >>> >>> > >>> >>> > > I am not sure exactly where you installed from, and >>> when you >>> did your >>> >>> > > installation, but any chance, you've hit: >>> >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? >>> >>> > > There is a link bugzilla record. >>> >>> > > >>> >>> > > ----- Original Message ----- >>> >>> > > > From: "Pedro Sousa" >> > >>> >>> > > > To: "Ronelle Landy" >> > >>> >>> > > > Cc: "Christopher Brown" >> >, "Ignacio Bravo" < >>> >>> > > ibravo at ltgfederal.com >, >>> "rdo-list" >>> >>> > > > > >>> >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM >>> >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable >>> version? >>> >>> > > > >>> >>> > > > Thanks Ronelle, >>> >>> > > > >>> >>> > > > do you think this kind of errors can be related with >>> network >>> >>> > > > settings? 
>>> >>> > > > >>> >>> > > > "Could not retrieve fact='rabbitmq_nodename', >>> >>> > > > resolution='': >>> >>> > > > undefined method `[]' for nil:NilClass Could not >>> retrieve >>> >>> > > > fact='rabbitmq_nodename', resolution='': >>> undefined >>> >>> > > > method `[]' >>> >>> > > > for nil:NilClass" >>> >>> > > > >>> >>> > > > On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy >>> > >>> >>> > > > wrote: >>> >>> > > > >>> >>> > > > > Hi Pedro, >>> >>> > > > > >>> >>> > > > > You could use the docs you referred to. >>> >>> > > > > Alternatively, if you want to use a vm for the >>> undercloud and >>> >>> > > > > baremetal >>> >>> > > > > machines for the overcloud, it is possible to use >>> Tripleo >>> >>> > > > > Qucikstart >>> >>> > > with a >>> >>> > > > > few modifications. >>> >>> > > > > https://bugs.launchpad.net/tripleo-quickstart/+bug/ >>> 1571028. >>> >>> > > > > >>> >>> > > > > ----- Original Message ----- >>> >>> > > > > > From: "Pedro Sousa" >> > >>> >>> > > > > > To: "Ronelle Landy" >> > >>> >>> > > > > > Cc: "Christopher Brown" >> >, "Ignacio Bravo" < >>> >>> > > > > ibravo at ltgfederal.com >>>> , >>> "rdo-list" >>> >>> > > > > > >>>> >>> >>> > > > > > Sent: Friday, June 3, 2016 11:48:38 AM >>> >>> > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable >>> version? >>> >>> > > > > > >>> >>> > > > > > Hi Ronelle, >>> >>> > > > > > >>> >>> > > > > > maybe I understand it wrong but I thought that >>> Tripleo >>> >>> > > > > > Quickstart >>> >>> > > was for >>> >>> > > > > > deploying virtual environments? >>> >>> > > > > > >>> >>> > > > > > And for baremetal we should use >>> >>> > > > > > >>> >>> > > > > >>> >>> > > >>> >>> > > >>> http://docs.openstack.org/developer/tripleo-docs/installation/i >>> nstallation.html >>> >>> > > > > > ? 
>>> >>> > > > > > Thanks
>>> >>> > > > > >
>>> >>> > > > > > On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote:
>>> >>> > > > > > > Hello,
>>> >>> > > > > > >
>>> >>> > > > > > > We have had success deploying RDO (Mitaka) on baremetal systems -
>>> >>> > > > > > > using Tripleo Quickstart with both single-nic-vlans and
>>> >>> > > > > > > bond-with-vlans network isolation configurations.
>>> >>> > > > > > >
>>> >>> > > > > > > Baremetal can have some complicated networking issues but, from
>>> >>> > > > > > > previous experiences, if a single-controller deployment worked
>>> >>> > > > > > > but a HA deployment did not, I would check:
>>> >>> > > > > > > - does the HA deployment command include: -e
>>> >>> > > > > > > /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
>>> >>> > > > > > > - are there possible MTU issues?
>>> >>> > > > > > >
>>> >>> > > > > > > ----- Original Message -----
>>> >>> > > > > > > > From: "Christopher Brown"
>>> >>> > > > > > > > To: pgsousa at gmail.com, ibravo at ltgfederal.com
>>> >>> > > > > > > > Cc: rdo-list at redhat.com
>>> >>> > > > > > > > Sent: Friday, June 3, 2016 10:29:39 AM
>>> >>> > > > > > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>>> >>> > > > > > > >
>>> >>> > > > > > > > Hello Ignacio,
>>> >>> > > > > > > >
>>> >>> > > > > > > > Thanks for your response and good to know it isn't just me!
>>> >>> > > > > > > >
>>> >>> > > > > > > > I would be more than happy to provide developers with access
>>> >>> > > > > > > > to our bare metal environments. I'll also file some bugzilla
>>> >>> > > > > > > > reports to see if this generates any interest.
>>> >>> > > > > > > >
>>> >>> > > > > > > > Please do let me know if you make any progress - I am trying
>>> >>> > > > > > > > to deploy HA with network isolation, multiple nics and vlans.
>>> >>> > > > > > > >
>>> >>> > > > > > > > The RDO web page states:
>>> >>> > > > > > > >
>>> >>> > > > > > > > "If you want to create a production-ready cloud, you'll want
>>> >>> > > > > > > > to use the TripleO quickstart guide."
>>> >>> > > > > > > >
>>> >>> > > > > > > > which is a contradiction in terms really.
>>> >>> > > > > > > >
>>> >>> > > > > > > > Cheers
>>> >>> > > > > > > >
>>> >>> > > > > > > > On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote:
>>> >>> > > > > > > > > Pedro / Christopher,
>>> >>> > > > > > > > >
>>> >>> > > > > > > > > Just wanted to share with you that I also had plenty of
>>> >>> > > > > > > > > issues deploying on bare metal HA servers, and have paused
>>> >>> > > > > > > > > the deployment using TripleO until better winds start to
>>> >>> > > > > > > > > flow here. I was able to deploy the QuickStart, but on bare
>>> >>> > > > > > > > > metal the history was different. Couldn't even deploy a two
>>> >>> > > > > > > > > server configuration.
>>> >>> > > > > > > > >
>>> >>> > > > > > > > > I was thinking that it would be good to have the developers
>>> >>> > > > > > > > > have access to one of our environments and go through a full
>>> >>> > > > > > > > > install with us to better see where things fail. We can do
>>> >>> > > > > > > > > this handholding deployment once every week/month based on
>>> >>> > > > > > > > > developers time availability. That way we can get a working
>>> >>> > > > > > > > > install, and we can troubleshoot real life environment
>>> >>> > > > > > > > > problems.
>>> >>> > > > > > > > >
>>> >>> > > > > > > > > IB
>>> >>> > > > > > > > >
>>> >>> > > > > > > > > On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote:
>>> >>> > > > > > > > > > Yes. I've used this, but I'll try again as there's seems
>>> >>> > > > > > > > > > to be new updates.
>>> >>> > > > > > > > > >
>>> >>> > > > > > > > > > Stable Branch: Skip all repos mentioned above, other than
>>> >>> > > > > > > > > > epel-release which is still required.
>>> >>> > > > > > > > > > Enable latest RDO Stable Delorean repository for all packages:
>>> >>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo
>>> >>> > > > > > > > > > https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
>>> >>> > > > > > > > > > Enable the Delorean Deps repository:
>>> >>> > > > > > > > > > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo
>>> >>> > > > > > > > > > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
>>> >>> > > > > > > > > >
>>> >>> > > > > > > > > > On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown
>>> >>> > > > > > > > > > <cbrown2 at ocf.co.uk> wrote:
>>> >>> > > > > > > > > > > No, Liberty deployed ok for us.
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > It suggests to me a package mismatch. Have you completely
>>> >>> > > > > > > > > > > rebuilt the undercloud and then the images using Liberty?
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote:
>>> >>> > > > > > > > > > > > AttributeError: 'module' object has no attribute 'PortOpt'
>>> >>> > > > > > > > > > > --
>>> >>> > > > > > > > > > > Regards,
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > Christopher Brown
>>> >>> > > > > > > > > > > OpenStack Engineer
>>> >>> > > > > > > > > > > OCF plc
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > Tel: +44 (0)114 257 2200
>>> >>> > > > > > > > > > > Web: www.ocf.co.uk
>>> >>> > > > > > > > > > > Blog: blog.ocf.co.uk
>>> >>> > > > > > > > > > > Twitter: @ocfplc
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > Please note, any emails relating to an OCF Support request
>>> >>> > > > > > > > > > > must always be sent to support at ocf.co.uk for a ticket
>>> >>> > > > > > > > > > > number to be generated or existing support ticket to be
>>> >>> > > > > > > > > > > updated. Should this not be done then OCF cannot be held
>>> >>> > > > > > > > > > > responsible for requests not dealt with in a timely manner.
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > OCF plc is a company registered in England and Wales.
>>> >>> > > > > > > > > > > Registered number 4132533, VAT number GB 780 6803 14.
>>> >>> > > > > > > > > > > Registered office address: OCF plc, 5 Rotunda Business
>>> >>> > > > > > > > > > > Centre, Thorncliffe Park, Chapeltown, Sheffield S35 2PG.
>>> >>> > > > > > > > > > >
>>> >>> > > > > > > > > > > If you have received this message in error, please notify
>>> >>> > > > > > > > > > > us immediately and remove it from your system.
>>> >>> > > > > > > > >
>>> >>> > > > > > > > > _______________________________________________
>>> >>> > > > > > > > > rdo-list mailing list
>>> >>> > > > > > > > > rdo-list at redhat.com
>>> >>> > > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
>>> >>> > > > > > > > >
>>> >>> > > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>>> >>> > > > > > > > --
>>> >>> > > > > > > > Regards,
>>> >>> > > > > > > >
>>> >>> > > > > > > > Christopher Brown
>>> >>> > > > > > > > OpenStack Engineer
>>> >>> > > > > > > > OCF plc
>>> >>> > > > > > > >
>>> >>> > > > > > > > _______________________________________________
>>> >>> > > > > > > > rdo-list mailing list
>>> >>> > > > > > > > rdo-list at redhat.com
>>> >>> > > > > > > > https://www.redhat.com/mailman/listinfo/rdo-list
>>> >>> > > > > > > >
>>> >>> > > > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com
>>> >
>>> > _______________________________________________
>>> > rdo-list mailing list
>>> > rdo-list at redhat.com
>>> > https://www.redhat.com/mailman/listinfo/rdo-list
>>> >
>>> > To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>>> _______________________________________________
>>> rdo-list mailing list
>>> rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> --
>> Graeme Gillies
>> Principal Systems Administrator
>> Openstack Infrastructure
>> Red Hat Australia
>>
>> _______________________________________________
>> rdo-list mailing list
>> rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at 
redhat.com > -- > Regards, > > Christopher Brown > OpenStack Engineer > OCF plc > > Tel: +44 (0)114 257 2200 > Web: www.ocf.co.uk > Blog: blog.ocf.co.uk > Twitter: @ocfplc > > Please note, any emails relating to an OCF Support request must always > be sent to support at ocf.co.uk for a ticket number to be generated or > existing support ticket to be updated. Should this not be done then OCF > > cannot be held responsible for requests not dealt with in a timely > manner. > > OCF plc is a company registered in England and Wales. Registered number > > 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, > > 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield S35 > 2PG. > > If you have received this message in error, please notify us > immediately and remove it from your system. > -- Graeme Gillies Principal Systems Administrator Openstack Infrastructure Red Hat Australia From ggillies at redhat.com Mon Jun 6 23:39:47 2016 From: ggillies at redhat.com (Graeme Gillies) Date: Tue, 7 Jun 2016 09:39:47 +1000 Subject: [rdo-list] Baremetal Tripleo stable version? In-Reply-To: <57557BE8.3080305@redhat.com> References: <1464964179.9673.30.camel@ocf.co.uk> <1463990834.19635670.1464968582104.JavaMail.zimbra@redhat.com> <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com> <138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com> <828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com> <53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com> <57557BE8.3080305@redhat.com> Message-ID: <6831d0e7-ff82-c7cc-c44d-e40964056c0a@redhat.com> On 06/06/16 23:34, John Trowbridge wrote: > Hola RDOistas, > > First, thanks Graeme for the great write-up of how you use the RDO > stable Mitaka release. I have some specific thoughts I will put in line, > but I also have a couple more general responses to this whole thread. > > RDO is a distribution of OpenStack. One of the projects we distribute is > TripleO. 
In doing so, we provide a lot of feedback into the upstream
> project to improve it; however, improvements and feedback need to go
> upstream. It is totally fine to use rdo-list to confer with other RDO
> users about whether an issue is expected behavior, or some issue with
> how things are set up. However, up until this post, this thread has been
> a bit of a pile-on of all the problems people have with TripleO, without
> anything actionable at the RDO level.
>
> What do I mean by actionable at the RDO level? I think the bare minimum
> here would be a bugzilla. Even better would be a launchpad bug for
> upstream TripleO. Even better still would be a patch that resolves the
> issue for you. If any of those three things is not getting attention,
> then the mailing list is a totally valid avenue to reach out for some
> help to drive those.
>
> After all, while RDO does provide a lot for free (as in free beer), the
> true benefit of open source is the freedom to help improve things.

So to be clear here, are you saying that with regards to things that
directly relate to tripleo (changing documentation focus, etc.) we
should be discussing this in the upstream tripleo communication
channels? And those channels would be the openstack-dev mailing list
(with the tripleo tag)?

>
> - trown
>
> On 06/05/2016 07:37 PM, Graeme Gillies wrote:
>> Hi Everyone,
>>
>> I just wanted to say I have been following this thread quite closely and
>> can sympathize with some of the pain people are going through to get
>> tripleO to work.
>>
>> Currently it's quite difficult and a bit opaque how to actually
>> utilise the stable mitaka repos in order to build a functional
>> undercloud and overcloud environment.
>> First I wanted to share the steps I have undergone in order to get a
>> functional overcloud working with RDO Mitaka utilising the RDO stable
>> release built by CentOS, and then I'll talk about some specific steps I
>> think need to be undertaken by the RDO/TripleO team in order to provide
>> a better experience in the future.
>>
>> To get a functional overcloud using RDO Mitaka, you need to do the following:
>>
>> 1) Install EPEL on your undercloud
>> 2) Install https://www.rdoproject.org/repos/rdo-release.rpm on your
>> undercloud
>> 3) Follow the normal steps to install your undercloud (modifying
>> undercloud.conf, and running openstack undercloud install)
>> 4) You will now need to manually patch ironic on the undercloud in order
>> to make sure repeated introspection works. This might not be needed if
>> you don't do any introspection, but I find more often than not you end
>> up having to do it, so it's worthwhile. The bug you need to patch is [1]
>> and I typically run the following commands to apply the patch:
>>
>> # sudo su -
>> $ cd /usr/lib/python2.7/site-packages
>> $ curl
>> 'https://review.openstack.org/changes/306421/revisions/abd50d8438e7d371ce24f97d8f8f67052b562007/patch?download'
>> | base64 -d | patch -p1
>> $ systemctl restart openstack-ironic-inspector
>> $ systemctl restart openstack-ironic-inspector-dnsmasq
>> $ exit
>> #
>>
>
> This is actually a good example of something actionable at the RDO
> level. The fix for this is already backported to the stable/mitaka
> branch upstream, and just requires a rebase of the mitaka
> ironic-inspector package.
>
> The only thing that could be improved here is an open bugzilla so that
> there is some RDO-level visibility that this package needs a rebase
> for a critical bug. I will take an action to file such a bugzilla, and
> do the rebase.

Thanks for doing that.
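As an aside for readers following along: the long curl URL in step 4 is just gerrit's generic "download patch" endpoint, so the same trick works for cherry-picking any unmerged fix onto installed packages. A hedged sketch (the helper name is mine; the change and revision IDs are the ones from the message):

```shell
#!/bin/sh
# Compose a gerrit patch-download URL; gerrit serves the patch
# base64-encoded, hence the `base64 -d` in the pipeline above.
gerrit_patch_url() {
    base="$1"       # gerrit base URL, e.g. https://review.openstack.org
    change="$2"     # numeric change ID, e.g. 306421
    revision="$3"   # full revision SHA of the patch set
    printf '%s/changes/%s/revisions/%s/patch?download\n' \
        "$base" "$change" "$revision"
}

# The exact change referenced above (the ironic-inspector fix [1]):
gerrit_patch_url https://review.openstack.org 306421 \
    abd50d8438e7d371ce24f97d8f8f67052b562007
# On the undercloud this would then be applied as root from
# /usr/lib/python2.7/site-packages:
#   curl "$(gerrit_patch_url ...)" | base64 -d | patch -p1
```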
There is still no clear documented policy (at least as far as I can
tell) on how/when RDO does rebases of the stable (CBS) packages from
stable upstream releases. I know currently the plan is to just follow
when upstream does releases, but it sounds like there is potential for
RDO to release a bit quicker than that?

>
>> 5) Manually patch the undercloud to build overcloud images using the
>> rdo-release rpm only (which utilises the stable Mitaka repo from
>> CentOS, and nothing from RDO Trunk [delorean]). I do this by modifying
>> the file
>>
>> /usr/lib/python2.7/site-packages/tripleoclient/v1/overcloud_image.py
>>
>> At around line 467 you will see a reference to epel; I add a new line
>> after that to include the rdo_release DIB element in the build as well.
>> This typically makes the file look something like
>>
>> http://paste.openstack.org/show/508196/
>>
>> (note line 468). Then I create a directory to store my images and build
>> them specifying the mitaka version of rdo_release. I then upload these
>> images
>>
>> # mkdir ~/images
>> # cd ~/images
>> # export RDO_RELEASE=mitaka
>> # openstack overcloud image build --all
>> # openstack overcloud image upload --update-existing
>>
>
> This is an example of something that needs to go into TripleO. I
> personally never recommend folks in RDO build images themselves, mostly
> because the tripleoclient wrapper around DIB is very opinionated and it
> is difficult to make even simple changes like this. The image building
> process in RDO is actually not even using tripleoclient, but the
> replacement library in tripleo-common that allows building images from a
> declarative YAML file:
>
> https://github.com/openstack/tripleo-common/blob/master/scripts/tripleo-build-images
>
> https://github.com/redhat-openstack/ansible-role-tripleo-image-build/blob/master/library/tripleo_build_images.py
>
> What needs to go to TripleO here is a launchpad bug about switching
> tripleoclient to use this new image building library.
That is not
> something we can do at the RDO level.
>
> Also, note there are stable release images published as well as the DLRN
> ones:
>
> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/cbs/

Thanks for pointing out all of this. I was not aware that there were
stable release images published (how is this not referenced in any
documentation anywhere?) nor that there is a different way to build
images from the standard tooling.

Like it or not there will always be people (myself included) who wish to
build their own images, either to customise and extend them, or to
simply verify that the code they are deploying matches what they are
given (for escrow and security purposes). The current image build
process is quite awkward to modify, and the fact that we already have a
method which seems better (at least, we have a declarative file we can
adjust and customise) makes it seem like a no-brainer to me to move all
of tripleo to using that ASAP. Having different methods of building
images just adds more confusion, which creates more uncertainty around
what you are getting and how to extend and customise it.

>
>> 6) Because of the bug at [2] which affects the ironic agent ramdisk, we
>> need to build a set of images utilising RDO Trunk for the mitaka branch
>> (where the fix is applied), and then upload *only* the new ironic
>> ramdisk. This is done with:
>>
>> # mkdir ~/images-mitaka-trunk
>> # cd ~/images-mitaka-trunk
>> # export USE_DELOREAN_TRUNK=1
>> # export
>> DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"
>> # export DELOREAN_REPO_FILE="delorean.repo"
>> # openstack overcloud image build --type agent-ramdisk
>> # sudo cp ironic-python-agent.initramfs /httpboot/agent.ramdisk
>>
>
> This is another example of something actionable at the RDO level, and
> has it all (Bugzilla with links to launchpad and gerrit). I will take an
> action to rebase the ironic-python-agent package to pull in that fix.
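For anyone hunting for those pre-built images, the two buildlogs.centos.org links seen in this thread differ only in the release name and the build source, so the directory layout appears to be as below. A hedged helper, derived only from the URLs quoted in this thread (verify the path still exists before relying on it):

```shell
#!/bin/sh
# Compose the buildlogs.centos.org directory holding pre-built TripleO
# overcloud images: "cbs" for the stable CBS-built images, "delorean"
# for the RDO Trunk ones.
tripleo_images_url() {
    release="$1"    # e.g. mitaka
    source="$2"     # cbs | delorean
    printf 'http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/%s/%s/\n' \
        "$release" "$source"
}

tripleo_images_url mitaka cbs
# Fetch the overcloud-full and ramdisk artifacts from that directory,
# then upload them to the undercloud with:
#   openstack overcloud image upload --update-existing
```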
>
>> 7) Follow the rest of the documentation to deploy the overcloud normally
>>
>> Please note that obviously your mileage may vary, and this is by no
>> means an exhaustive list of the problems. I have however used these
>> steps to do multiple node deployments (10+ nodes) with HA over different
>> hardware sets with different networking setups (single nic, multiple nic
>> with bonding + vlans).
>>
>> With all the different repos floating around, all of which change very
>> rapidly, combined with the documentation defaults targeting developers
>> and CI systems (not end users), it's hard to not only get a stable
>> TripleO install up, but also communicate and discuss clearly with others
>> what is working, what is broken, and how to compare two installations to
>> see if they are experiencing the same issues.
>>
>> To this end I would like to suggest to the RDO and TripleO community
>> that we undertake the following:
>>
>> 1) Overhaul all the TripleO documentation so that all the steps default
>> to utilising/deploying RDO Stable (that is, the releases done by
>> CBS). There should be colored boxes with alternative steps for those who
>> wish to use RDO Trunk on the stable branch, and RDO Trunk from master.
>> This basically inverts the current pattern. I think anyone, Operator or
>> developer, who is working through the documentation for the first time,
>> should be given steps that maximise the chance of success, and thus the
>> most stable release we have. Once a user has gone through the process
>> once, they can look at the alternative steps for more aggressive releases
>>
>
> First, I am in 100% agreement that the TripleO documentation could use a
> major overhaul.
>
> That said, this is actually a fairly difficult problem. Your proposal is
> perfect for the RDO case, but it does not seem right for TripleO
> upstream docs to default to a stable release. In any case, this is
> something to be solved in TripleO and not in RDO.
>
> I did try to solve this by forking the TripleO docs and modifying them
> to be more RDO-centric. However, this was pretty difficult to maintain,
> and so I abandoned that effort. If there were some group of RDO
> community members dedicated to doing that, it might be a possible
> solution. These would need to be net new contributions though, as I
> personally do not have bandwidth for that.

Ok I guess I understand the need for the tripleO docs to be stable/rdo
agnostic, as technically it's a purely upstream project that can be
utilised by many different distros/communities, but this kind of leaves
us in a gap at the moment where we don't really have any satisfactory
user docs for people deploying with RDO who want the best stable
experience we can give.

I'm wondering: do we need to take a copy of the downstream docs
(docs.redhat.com) and use that as a basis for the TripleO/RDO user guide
on rdoproject.org, and then keep that as the "upstream reference" for
all downstream docs? Or do we take the tripleO docs, then modify them to
be RDO specific, then further modify them to be downstream specific
(tripleo.org -> rdoproject.org -> docs.redhat.com)?

I guess I'm trying to get a clear workflow where we can maximise reuse
as much as possible, and avoid duplication of effort and the potential
for doc updates to get "lost" in different places.

>
>> 2) Patch python-tripleoclient so that by default, when you run
>> "openstack overcloud image build", it builds the images utilising the
>> rdo_release DIB element, and sets the RDO_RELEASE environment variable
>> to be 'mitaka' or whatever the current stable release is (and we should
>> endeavour to update it with new releases). There should be no extra
>> environment variables necessary to build images, and by default it
>> should never touch anything RDO Trunk (delorean) related
>>
>
> I think in the short term, it is really best to use the pre-built images
> for RDO, using virt-customize where needed to modify them.
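trown's "pre-built images plus virt-customize" suggestion can be sketched like this. The helper below only assembles the command line (so the pattern can be shown without libguestfs present); the package names are illustrative assumptions, and on a real undercloud you would run the printed command directly against the downloaded image:

```shell
#!/bin/sh
# Build a virt-customize invocation that installs extra packages into a
# pre-built overcloud image (virt-customize comes from libguestfs-tools).
build_customize_cmd() {
    image="$1"; shift
    cmd="virt-customize -a $image"
    for pkg in "$@"; do
        cmd="$cmd --install $pkg"   # one --install flag per package
    done
    printf '%s\n' "$cmd"
}

build_customize_cmd overcloud-full.qcow2 vim-enhanced tcpdump
# prints: virt-customize -a overcloud-full.qcow2 --install vim-enhanced --install tcpdump
# After modifying the image, re-upload it:
#   openstack overcloud image upload --update-existing
```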
>
> In the medium term, I think this would be a pretty benign patch to carry
> on the stable release of python-tripleoclient, but we need to start with
> a bugzilla for that.
>
> Once upstream TripleO switches tripleoclient to use the declarative YAML
> library in tripleo-common, I think carrying a patch on the stable
> release that makes it default to building the stable images makes a lot
> of sense.
>
>> 3) For bugs like the two I have mentioned above, we need to have some
>> sort of robust process for either backporting those patches to the
>> builds in CBS (I understand we don't do this for various reasons), or we
>> need some kind of tooling or solution that allows operators to apply
>> only the fixes they need from RDO Trunk (delorean). We need to ensure
>> that when an Operator utilises TripleO they have the greatest chance of
>> success; bugs such as these which severely impact the deployment
>> process harm the adoption of TripleO and RDO.
>>
>
> For the ironic bugs I have taken an action to rebase and pick up the
> changes from the upstream stable branch. In general, this is only done
> when there is some critical issue, and not on a periodic basis. I was
> not aware of the two critical issues posted, but thanks to this detailed
> write-up now I am.

Excellent, thanks.

>
> As far as tooling to apply only fixes needed from RDO Trunk, that is
> just yum. Downloading the delorean.repo and modifying it to exclude all
> but the packages that need hotfixes would get the same result without
> manual patching.

Yep, agreed. I was just thinking it would be great if there was even a
simple python tool where you could run something like "rdo-apply-trunk"
and it would automate all that for you.

>
>> 4) We should curate and keep an up-to-date page on rdoproject.org that
>> highlights the outstanding issues related to TripleO on the RDO
>> Stable (CBS) releases.
These should have links to relevant bugzillas, >> clean instructions on how to work around the issue, or cleanly apply a >> patch to avoid the issue, and as new releases make it out, we should >> update the page to drop off workarounds that are no longer needed. >> > > I like this idea. It suffers from the same problem as the TripleO docs > issue though. That being, that it requires net new community members to > step up and take ownership of it. Well perhaps I am wrong, but surely the developers who are maintaining packages for tripleo can help out? When they see a bugzilla that they have triaged and looks like something that deserves a workaround note, they could add it themselves, removing it when the bz is closed. > >> The goal is to push Operators/Users to working with our most stable code >> as much as possible, and track/curate issues around that. This way >> everyone should be on the same page, issues are easier to discuss and >> diagnose, and overall peoples experiences should be better. >> >> I'm interested in thoughts, feedback, and concerns, both from the RDO >> and TripleO community, and from the Operator/User community. 
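trown's "just yum" approach above can be made concrete: pull down the trunk repo file, then restrict it with a per-repo includepkgs line so that a routine yum update only ever takes the hotfixed packages from trunk. A minimal sketch (the function name is mine, and the package list is an illustrative assumption):

```shell
#!/bin/sh
# Append an includepkgs= line to a yum repo file; yum will then consider
# only the named packages from that repo, leaving everything else to the
# stable CBS repos.
restrict_repo() {
    repofile="$1"; shift
    pkgs=$(printf '%s,' "$@"); pkgs=${pkgs%,}   # join args with commas
    printf 'includepkgs=%s\n' "$pkgs" >> "$repofile"
}

# Usage on the undercloud (network/root steps shown as comments only):
#   curl -o /etc/yum.repos.d/delorean-mitaka.repo \
#       http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo
#   restrict_repo /etc/yum.repos.d/delorean-mitaka.repo \
#       openstack-ironic-inspector openstack-ironic-python-agent
#   yum update openstack-ironic-inspector openstack-ironic-python-agent
```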
>> >> Regards, >> >> Graeme >> >> [1] https://bugs.launchpad.net/ironic-inspector/+bug/1570447 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1322892 >> >> On 05/06/16 02:04, Pedro Sousa wrote: >>> Thanks Marius, >>> >>> I can confirm that it installs fine with 3 controllers + 3 computes >>> after recreating the stack >>> >>> Regards >>> >>> On Sat, Jun 4, 2016 at 4:14 PM, Marius Cornea >> > wrote: >>> >>> Hi Pedro, >>> >>> Scaling out controller nodes is not supported at this moment: >>> https://bugzilla.redhat.com/show_bug.cgi?id=1243312 >>> >>> On Sat, Jun 4, 2016 at 5:05 PM, Pedro Sousa >> > wrote: >>> > Hi, >>> > >>> > some update on scaling the cloud: >>> > >>> > 1 controller + 1 compute -> 1 controller + 3 computes OK >>> > >>> > 1 controller + 3 computes -> 3 controllers + 3 compute FAILS >>> > >>> > Problem: The new controller nodes are "stuck" in "pscd start", so >>> it seems >>> > to be a problem joining the pacemaker cluster... Did anyone had this >>> > problem? >>> > >>> > Regards >>> > >>> > >>> > >>> > >>> > >>> > >>> > On Sat, Jun 4, 2016 at 1:50 AM, Pedro Sousa >> > wrote: >>> >> >>> >> Hi, >>> >> >>> >> I finally managed to install a baremetal in mitaka with 1 >>> controller + 1 >>> >> compute with network isolation. Thank god :) >>> >> >>> >> All I did was: >>> >> >>> >> #yum install centos-release-openstack-mitaka >>> >> #sudo yum install python-tripleoclient >>> >> >>> >> without epel repos. >>> >> >>> >> Then followed instructions from Redhat Site. >>> >> >>> >> I downloaded the overcloud images from: >>> >> >>> http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/ >>> >> >>> >> I do have an issue that forces me to delete a json file and run >>> >> os-refresh-config inside my overcloud nodes other than that it >>> installs >>> >> fine. >>> >> >>> >> Now I'll test with more 2 controllers + 2 computes to have a full HA >>> >> deployment. >>> >> >>> >> If anyone needs help to document this I'll be happy to help. 
>>> >> >>> >> Regards, >>> >> Pedro Sousa >>> >> >>> >> >>> >> On Fri, Jun 3, 2016 at 8:26 PM, Ronelle Landy >> > wrote: >>> >>> >>> >>> The report says: "Fix Released" as of 2016-05-24. >>> >>> Are you installing on a clean system with the latest repositories? >>> >>> >>> >>> Might also want to check your version of rabbitmq: I have >>> >>> rabbitmq-server-3.6.2-3.el7.noarch on CentOS 7. >>> >>> >>> >>> ----- Original Message ----- >>> >>> > From: "Pedro Sousa" > >>> >>> > To: "Ronelle Landy" > >>> >>> > Cc: "Christopher Brown" >> >, "Ignacio Bravo" >>> >>> > >, "rdo-list" >>> >>> > > >>> >>> > Sent: Friday, June 3, 2016 1:20:43 PM >>> >>> > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> >>> > >>> >>> > Anyway to workaround this? Maybe downgrade hiera? >>> >>> > >>> >>> > On Fri, Jun 3, 2016 at 5:55 PM, Ronelle Landy >>> > >>> >>> > wrote: >>> >>> > >>> >>> > > I am not sure exactly where you installed from, and when you >>> did your >>> >>> > > installation, but any chance, you've hit: >>> >>> > > https://bugs.launchpad.net/tripleo/+bug/1584892? >>> >>> > > There is a link bugzilla record. >>> >>> > > >>> >>> > > ----- Original Message ----- >>> >>> > > > From: "Pedro Sousa" >> > >>> >>> > > > To: "Ronelle Landy" >> > >>> >>> > > > Cc: "Christopher Brown" >> >, "Ignacio Bravo" < >>> >>> > > ibravo at ltgfederal.com >, >>> "rdo-list" >>> >>> > > > > >>> >>> > > > Sent: Friday, June 3, 2016 12:26:58 PM >>> >>> > > > Subject: Re: [rdo-list] Baremetal Tripleo stable version? >>> >>> > > > >>> >>> > > > Thanks Ronelle, >>> >>> > > > >>> >>> > > > do you think this kind of errors can be related with network >>> >>> > > > settings? 
>>> "Could not retrieve fact='rabbitmq_nodename', resolution='':
>>> undefined method `[]' for nil:NilClass
>>> Could not retrieve fact='rabbitmq_nodename', resolution='':
>>> undefined method `[]' for nil:NilClass"
>>>
>>> On Fri, Jun 3, 2016 at 4:56 PM, Ronelle Landy wrote:
>>>
>>>> Hi Pedro,
>>>>
>>>> You could use the docs you referred to.
>>>> Alternatively, if you want to use a vm for the undercloud and
>>>> baremetal machines for the overcloud, it is possible to use Tripleo
>>>> Quickstart with a few modifications.
>>>> https://bugs.launchpad.net/tripleo-quickstart/+bug/1571028
>>>>
>>>> ----- Original Message -----
>>>>> From: "Pedro Sousa"
>>>>> To: "Ronelle Landy"
>>>>> Cc: "Christopher Brown", "Ignacio Bravo" <ibravo at ltgfederal.com>,
>>>>> "rdo-list"
>>>>> Sent: Friday, June 3, 2016 11:48:38 AM
>>>>> Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>>>>>
>>>>> Hi Ronelle,
>>>>>
>>>>> maybe I understood it wrong, but I thought that Tripleo Quickstart
>>>>> was for deploying virtual environments, and that for baremetal we
>>>>> should use
>>>>> http://docs.openstack.org/developer/tripleo-docs/installation/installation.html
>>>>> ?
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Fri, Jun 3, 2016 at 4:43 PM, Ronelle Landy wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> We have had success deploying RDO (Mitaka) on baremetal systems -
>>>>>> using Tripleo Quickstart with both single-nic-vlans and
>>>>>> bond-with-vlans network isolation configurations.
>>>>>>
>>>>>> Baremetal can have some complicated networking issues but, from
>>>>>> previous experiences, if a single-controller deployment worked but
>>>>>> a HA deployment did not, I would check:
>>>>>> - does the HA deployment command include: -e
>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
>>>>>> - are there possible MTU issues?
>>>>>>
>>>>>> ----- Original Message -----
>>>>>>> From: "Christopher Brown"
>>>>>>> To: pgsousa at gmail.com, ibravo at ltgfederal.com
>>>>>>> Cc: rdo-list at redhat.com
>>>>>>> Sent: Friday, June 3, 2016 10:29:39 AM
>>>>>>> Subject: Re: [rdo-list] Baremetal Tripleo stable version?
>>>>>>>
>>>>>>> Hello Ignacio,
>>>>>>>
>>>>>>> Thanks for your response and good to know it isn't just me!
>>>>>>>
>>>>>>> I would be more than happy to provide developers with access to
>>>>>>> our bare metal environments. I'll also file some bugzilla reports
>>>>>>> to see if this generates any interest.
>>>>>>>
>>>>>>> Please do let me know if you make any progress - I am trying to
>>>>>>> deploy HA with network isolation, multiple nics and vlans.
>>>>>>>
>>>>>>> The RDO web page states:
>>>>>>>
>>>>>>> "If you want to create a production-ready cloud, you'll want to
>>>>>>> use the TripleO quickstart guide."
>>>>>>>
>>>>>>> which is a contradiction in terms really.
>>>>>>>
>>>>>>> Cheers
>>>>>>>
>>>>>>> On Fri, 2016-06-03 at 14:30 +0100, Ignacio Bravo wrote:
>>>>>>>> Pedro / Christopher,
>>>>>>>>
>>>>>>>> Just wanted to share with you that I also had plenty of issues
>>>>>>>> deploying on bare metal HA servers, and have paused the
>>>>>>>> deployment using TripleO until better winds start to flow here.
>>>>>>>> I was able to deploy the QuickStart, but on bare metal the story
>>>>>>>> was different. Couldn't even deploy a two server configuration.
>>>>>>>>
>>>>>>>> I was thinking that it would be good to have the developers
>>>>>>>> have access to one of our environments and go through a full
>>>>>>>> install with us to better see where things fail. We can do this
>>>>>>>> handholding deployment once every week/month based on
>>>>>>>> developers' time availability. That way we can get a working
>>>>>>>> install, and we can troubleshoot real life environment problems.
>>>>>>>>
>>>>>>>> IB
>>>>>>>>
>>>>>>>> On Jun 3, 2016, at 6:15 AM, Pedro Sousa wrote:
>>>>>>>>
>>>>>>>>> Yes. I've used this, but I'll try again as there seem to be
>>>>>>>>> new updates.
>>>>>>>>>
>>>>>>>>> Stable Branch: Skip all repos mentioned above, other than
>>>>>>>>> epel-release which is still required.
>>>>>>>>> Enable the latest RDO Stable Delorean repository for all
>>>>>>>>> packages:
>>>>>>>>> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo
>>>>>>>>> https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
>>>>>>>>> Enable the Delorean Deps repository:
>>>>>>>>> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo
>>>>>>>>> http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
>>>>>>>>>
>>>>>>>>> On Fri, Jun 3, 2016 at 11:10 AM, Christopher Brown
>>>>>>>>> <cbrown2 at ocf.co.uk> wrote:
>>>>>>>>>> No, Liberty deployed ok for us.
>>>>>>>>>>
>>>>>>>>>> It suggests to me a package mismatch. Have you completely
>>>>>>>>>> rebuilt the undercloud and then the images using Liberty?
>>>>>>>>>>
>>>>>>>>>> On Fri, 2016-06-03 at 11:04 +0100, Pedro Sousa wrote:
>>>>>>>>>>> AttributeError: 'module' object has no attribute 'PortOpt'
>>>>>>>>>> --
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> Christopher Brown
>>>>>>>>>> OpenStack Engineer
>>>>>>>>>> OCF plc
>>>>>>>>>>
>>>>>>>>>> Tel: +44 (0)114 257 2200
>>>>>>>>>> Web: www.ocf.co.uk
>>>>>>>>>> Blog: blog.ocf.co.uk
>>>>>>>>>> Twitter: @ocfplc
>>>>>>>>>>
>>>>>>>>>> Please note, any emails relating to an OCF Support request
>>>>>>>>>> must always be sent to support at ocf.co.uk for a ticket number
>>>>>>>>>> to be generated or an existing support ticket to be updated.
>>>>>>>>>> Should this not be done then OCF cannot be held responsible
>>>>>>>>>> for requests not dealt with in a timely manner.
>>>>>>>>>>
>>>>>>>>>> OCF plc is a company registered in England and Wales.
>>>>>>>>>> Registered number 4132533, VAT number GB 780 6803 14.
>>>>>>>>>> Registered office address: OCF plc, 5 Rotunda Business Centre,
>>>>>>>>>> Thorncliffe Park, Chapeltown, Sheffield S35 2PG.
>>>>>>>>>>
>>>>>>>>>> If you have received this message in error, please notify us
>>>>>>>>>> immediately and remove it from your system.
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> rdo-list mailing list
>>>>>>>> rdo-list at redhat.com
>>>>>>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>>>>>>
>>>>>>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>>>>>
>>>>>>> --
>>>>>>> Regards,
>>>>>>>
>>>>>>> Christopher Brown
>>>>>>> OpenStack Engineer
>>>>>>> OCF plc
>>>
>>> _______________________________________________
>>> rdo-list mailing list
>>> rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From ggillies at redhat.com Tue Jun 7 00:02:44 2016
From: ggillies at redhat.com (Graeme Gillies)
Date: Tue, 7 Jun 2016 10:02:44 +1000
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To: <6e8295ec-7111-be30-e969-3075a22e9e21@redhat.com>
References: <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
	<53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
	<57557BE8.3080305@redhat.com>
	<6e8295ec-7111-be30-e969-3075a22e9e21@redhat.com>
Message-ID:

On 07/06/16 00:37, Rich Bowen wrote:
> On 06/06/2016 10:25 AM, Alan Pevec wrote:
>>>> 4) We should curate and keep an up to date page on rdoproject.org that
>>>> does highlight the outstanding issues related to TripleO on the RDO
>>>> Stable (CBS) releases. These should have links to relevant bugzillas,
>>>> clean instructions on how to work around the issue, or cleanly apply a
>>>> patch to avoid the issue, and as new releases make it out, we should
>>>> update the page to drop off workarounds that are no longer needed.
>> There's a placeholder page exactly for that:
>> https://www.rdoproject.org/testday/workarounds/
>> (shortcut https://www.rdoproject.org/workarounds )
>
> This page tends to get flushed each test day, and so we haven't done a
> great job of keeping it up to date *between* test days. Right now,
> there's nothing there. It would indeed be really helpful to have this
> page updated on a regular basis, with no-longer-relevant workarounds
> removed and new ones added.

While this page is good, I think it makes sense to drop the references
to testday, because when I see something that mentions test day I
automatically assume it is out of date and only affects the pre-release
versions that were tested on test day. If we want to use this page,
could we move it to https://www.rdoproject.org/workarounds as the
canonical source?

> --Rich
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
Graeme Gillies
Principal Systems Administrator
Openstack Infrastructure
Red Hat Australia

From rbowen at redhat.com Tue Jun 7 00:46:02 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Mon, 6 Jun 2016 20:46:02 -0400
Subject: [rdo-list] Baremetal Tripleo stable version?
In-Reply-To:
References: <934876527.19639685.1464969395230.JavaMail.zimbra@redhat.com>
	<138454101.19664515.1464972945010.JavaMail.zimbra@redhat.com>
	<828347230.19699763.1464981971819.JavaMail.zimbra@redhat.com>
	<53222da4-5d28-4e21-c8de-a9b2ab4e1e10@redhat.com>
	<57557BE8.3080305@redhat.com>
	<6e8295ec-7111-be30-e969-3075a22e9e21@redhat.com>
Message-ID: <0f9fe6dd-e494-96a0-fb3c-ce148ce848f6@redhat.com>

On 06/06/2016 08:02 PM, Graeme Gillies wrote:
> While this page is good, I think it makes sense to drop the references
> to testday, because when I see something that mentions test day I
> automatically assume it is out of date and only affects the pre-release
> versions that were tested on test day. If we want to use this page,
> could we move it to https://www.rdoproject.org/workarounds as the
> canonical source?

+1

From me at gbraad.nl Tue Jun 7 00:48:27 2016
From: me at gbraad.nl (Gerard Braad)
Date: Tue, 7 Jun 2016 08:48:27 +0800
Subject: [rdo-list] Reminder: Newton 1 test day, Thursday and Friday
In-Reply-To: <88f78c9b-fd9d-bc87-986c-f7cb63d5ee35@redhat.com>
References: <88f78c9b-fd9d-bc87-986c-f7cb63d5ee35@redhat.com>
Message-ID:

Hi,

On Mon, Jun 6, 2016 at 10:48 PM, Rich Bowen wrote:
> Details are at
> https://www.rdoproject.org/testday/newton/milestone1/

According to this page:

  * For TripleO-based installs, try the TripleO quickstart.

How about instack? Currently quickstart does not do baremetal
deployments well. I do not say it is impossible, but it needs
additional work...

regards,

Gerard

Note: Holiday period in China

--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]

From ckdwibedy at gmail.com Tue Jun 7 05:28:00 2016
From: ckdwibedy at gmail.com (Chinmaya Dwibedy)
Date: Tue, 7 Jun 2016 10:58:00 +0530
Subject: [rdo-list] Issue with assignment of Intel's QAT Card to VM
	(PCI-passthrough) using openstack-mitaka release on CentOS 7.2 host
Message-ID:

Hi All,

I want the Intel's QAT Card to be used as a PCI passthrough device, but
when I launch a VM using a flavor configured for passthrough, it gives
the errors below in nova-conductor.log and the instance goes into Error
state. Note that I have installed the openstack-mitaka release on the
host (CentOS 7.2).
Can anyone please have a look at the details below and let me know if I
have missed anything or done anything wrong? Thank you in advance for
your support and time.

When I create an instance, this error is output in nova-conductor.log:

2016-06-06 05:42:42.005 4898 WARNING nova.scheduler.utils
[req-94484e27-1998-4e3a-8aa8-06805613ae65 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] Failed to
compute_task_build_instances: No valid host was found. There are not
enough hosts available.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
    raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.

2016-06-06 05:42:42.006 4898 WARNING nova.scheduler.utils
[req-94484e27-1998-4e3a-8aa8-06805613ae65 266f5859848e4f39b9725203dda5c3f2
4bc608763cee41d9a8df26d3ef919825 - - -] [instance:
f1db1cce-0777-4f0e-a141-4b278c2d98b4] Setting instance to ERROR state

In order to assign Intel's QAT Card to VMs, I followed the procedure
below.

1) Using the PCI bus ID, found the product id:

[root at localhost ~(keystone_admin)]# lspci -nn | grep QAT
83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
[root at localhost ~(keystone_admin)]# cat /sys/bus/pci/devices/0000:83:00.0/device
0x0435
[root at localhost ~(keystone_admin)]# cat /sys/bus/pci/devices/0000:88:00.0/device
0x0435

2) Configured the following in nova.conf:

pci_alias = {"name": "QuickAssist", "product_id": "0435", "vendor_id": "8086", "device_type": "type-PCI"}
pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0435"}]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter
scheduler_available_filters=nova.scheduler.filters.all_filter

3) service openstack-nova-api restart

4) systemctl restart openstack-nova-compute

5) [root at localhost ~(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

6) nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1"

7) [root at localhost ~(keystone_admin)]# nova flavor-show 4
+----------------------------+--------------------------------------------+
| Property                   | Value                                      |
+----------------------------+--------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                      |
| OS-FLV-EXT-DATA:ephemeral  | 0                                          |
| disk                       | 80                                         |
| extra_specs                | {"pci_passthrough:alias": "QuickAssist:1"} |
| id                         | 4                                          |
| name                       | m1.large                                   |
| os-flavor-access:is_public | True                                       |
| ram                        | 8192                                       |
| rxtx_factor                | 1.0                                        |
| swap                       |                                            |
| vcpus                      | 4                                          |
+----------------------------+--------------------------------------------+

8) [root at localhost ~(keystone_admin)]# nova boot --flavor 4
--key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454
--nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be --user-data=./myfile.txt TEST
WARNING: Option "--key_name" is deprecated; use "--key-name"; this
option will be removed in novaclient 3.3.0.
+--------------------------------------+--------------------------------------------------------------------------+
| Property                             | Value                                                                    |
+--------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                   |
| OS-EXT-AZ:availability_zone          |                                                                          |
| OS-EXT-SRV-ATTR:host                 | -                                                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000026                                                        |
| OS-EXT-STS:power_state               | 0                                                                        |
| OS-EXT-STS:task_state                | scheduling                                                               |
| OS-EXT-STS:vm_state                  | building                                                                 |
| OS-SRV-USG:launched_at               | -                                                                        |
| OS-SRV-USG:terminated_at             | -                                                                        |
| accessIPv4                           |                                                                          |
| accessIPv6                           |                                                                          |
| adminPass                            | 7ZKdcaQut7gu                                                             |
| config_drive                         |                                                                          |
| created                              | 2016-06-06T09:42:41Z                                                     |
| flavor                               | m1.large (4)                                                             |
| hostId                               |                                                                          |
| id                                   | f1db1cce-0777-4f0e-a141-4b278c2d98b4                                     |
| image                                | Benu-vMEG-Dev-M.0.0.0-160525-1347 (bc859dc5-103b-428b-814f-d36e59009454) |
| key_name                             | oskey1                                                                   |
| metadata                             | {}                                                                       |
| name                                 | TEST                                                                     |
| os-extended-volumes:volumes_attached | []                                                                       |
| progress                             | 0                                                                        |
| security_groups                      | default                                                                  |
| status                               | BUILD                                                                    |
| tenant_id                            | 4bc608763cee41d9a8df26d3ef919825                                         |
| user_id                              | 266f5859848e4f39b9725203dda5c3f2                                         |
| updated                              | 2016-06-06T09:42:41Z                                                     |
+--------------------------------------+--------------------------------------------------------------------------+
[root@ localhost ~(keystone_admin)]#
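As context for the NoValidHost failure above: the PciPassthroughFilter passes a host only if the flavor's alias request can be matched against the host's PCI device pool. The sketch below is a deliberately simplified illustration of that matching idea, not Nova's actual implementation; the function names (`parse_alias_request`, `host_passes`) are invented for this example.

```python
import json

# The pci_alias value from the nova.conf lines in this message.
PCI_ALIAS = ('{"name": "QuickAssist", "product_id": "0435", '
             '"vendor_id": "8086", "device_type": "type-PCI"}')


def parse_alias_request(extra_spec):
    """Split a flavor extra_spec like 'QuickAssist:1' into (name, count)."""
    name, _, count = extra_spec.partition(":")
    return name, int(count or 1)


def devices_matching(alias, devices):
    """Return host devices whose vendor/product ids match the alias."""
    return [d for d in devices
            if d["vendor_id"] == alias["vendor_id"]
            and d["product_id"] == alias["product_id"]]


def host_passes(extra_spec, devices):
    """Simplified filter check: does the host have enough matching devices?"""
    alias = json.loads(PCI_ALIAS)
    name, count = parse_alias_request(extra_spec)
    if name != alias["name"]:
        return False
    return len(devices_matching(alias, devices)) >= count


# The two QAT devices reported in the pci_devices table further down.
host_devices = [
    {"vendor_id": "8086", "product_id": "0435", "address": "0000:83:00.0"},
    {"vendor_id": "8086", "product_id": "0435", "address": "0000:88:00.0"},
]

print(host_passes("QuickAssist:1", host_devices))  # prints: True
```

One hedged observation: the pci_devices table below reports these devices with dev_type type-PF, while the alias requests device_type type-PCI; if the real filter also compares device types, that mismatch alone could empty the pool, so it is worth ruling out.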
9) MariaDB [nova]> select * from pci_devices;
| created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid | request_id | numa_node | parent_addr |
| 2016-06-03 12:01:45 | 2016-06-06 09:46:35 | NULL | 0 | 1 | 1 | 0000:83:00.0 | 0435 | 8086 | type-PF | pci_0000_83_00_0 | label_8086_0435 | available | {} | NULL | NULL | 1 | NULL |
| 2016-06-03 12:01:45 | 2016-06-06 09:46:35 | NULL | 0 | 2 | 1 | 0000:88:00.0 | 0435 | 8086 | type-PF | pci_0000_88_00_0 | label_8086_0435 | available | {} | NULL | NULL | 1 | NULL |
2 rows in set (0.00 sec)

MariaDB [nova]>

[root at localhost ~(keystone_admin)]# dmesg | grep -e DMAR -e IOMMU
[    0.000000] ACPI: DMAR 000000007b69a000 00130 (v01 INTEL S2600WT 00000001 INTL 20091013)
[    0.128779] dmar: IOMMU 0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020de
[    0.128785] dmar: IOMMU 1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020de
[    0.128911] IOAPIC id 10 under DRHD base 0xfbffc000 IOMMU 0
[    0.128912] IOAPIC id 8 under DRHD base 0xc7ffc000 IOMMU 1
[    0.128913] IOAPIC id 9 under DRHD base 0xc7ffc000 IOMMU 1
[root@ localhost ~(keystone_admin)]#
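Since the pci_alias and pci_passthrough_whitelist values in nova.conf are parsed as JSON, one cheap sanity check (a sketch in plain Python; the strings are copied verbatim from the nova.conf lines in this message) is to confirm that both values parse and agree with each other and with the lspci output ([8086:0435]):

```python
import json

# Values exactly as they appear in the nova.conf excerpt above.
pci_alias = ('{"name": "QuickAssist", "product_id": "0435", '
             '"vendor_id": "8086", "device_type": "type-PCI"}')
pci_passthrough_whitelist = '[{"vendor_id":"8086","product_id":"0435"}]'

alias = json.loads(pci_alias)                      # raises ValueError if malformed
whitelist = json.loads(pci_passthrough_whitelist)  # likewise

# The whitelist entry should agree with the alias and with `lspci -nn`.
assert alias["vendor_id"] == whitelist[0]["vendor_id"] == "8086"
assert alias["product_id"] == whitelist[0]["product_id"] == "0435"
print("pci_alias and pci_passthrough_whitelist parse and agree")
```

This only rules out malformed or mismatched JSON; it says nothing about whether the alias options themselves (e.g. device_type) are what Nova expects for these devices.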
[root at localhost ~(keystone_admin)]# lscpu | grep Virtualization
Virtualization:        VT-x

[root at localhost ~(keystone_admin)]# nova service-list
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host      | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+
| 9  | nova-cert        | localhost | internal | enabled | up    | 2016-06-07T04:58:28.000000 | -               |
| 10 | nova-consoleauth | localhost | internal | enabled | up    | 2016-06-07T04:58:30.000000 | -               |
| 11 | nova-scheduler   | localhost | internal | enabled | up    | 2016-06-07T04:58:30.000000 | -               |
| 12 | nova-conductor   | localhost | internal | enabled | up    | 2016-06-07T04:58:29.000000 | -               |
| 18 | nova-compute     | localhost | nova     | enabled | up    | 2016-06-07T04:58:29.000000 | -               |
+----+------------------+-----------+----------+---------+-------+----------------------------+-----------------+

[root at localhost ~(keystone_admin)]# nova host-list
+-----------+-------------+----------+
| host_name | service     | zone     |
+-----------+-------------+----------+
| localhost | cert        | internal |
| localhost | consoleauth | internal |
| localhost | scheduler   | internal |
| localhost | conductor   | internal |
| localhost | compute     | nova     |
+-----------+-------------+----------+

[root at localhost ~(keystone_admin)]# neutron agent-list
| id                                   | agent_type         | host      | availability_zone | alive | admin_state_up | binary                    |
| 0e81d20f-b41d-490a-966a-7171880963b9 | Metadata agent     | localhost |                   | :-)   | True           | neutron-metadata-agent    |
| 2ccb17dc-35d8-41cc-8e5d-83496a7e26b0 | Metering agent     | localhost |                   | :-)   | True           | neutron-metering-agent    |
| 6fef2fa7-2479-4d45-889c-b38b854ac3e3 | DHCP agent         | localhost | nova              | :-)   | True           | neutron-dhcp-agent        |
| 87c976cc-e3cd-4818-aa4f-ee599bf812b1 | L3 agent           | localhost | nova              | :-)   | True           | neutron-l3-agent          |
| aeb4f399-2281-4ad3-b880-802812910ec8 | Open vSwitch agent | localhost |                   | :-)   | True           | neutron-openvswitch-agent |

[root at localhost ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | localhost           | up    | enabled |
+----+---------------------+-------+---------+

[root at localhost ~(keystone_admin)]# grep ^virt_type /etc/nova/nova.conf
virt_type=kvm
[root at localhost ~(keystone_admin)]# grep ^compute_driver /etc/nova/nova.conf
compute_driver=libvirt.LibvirtDriver

Regards,
Chinmaya
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ckdwibedy at gmail.com Tue Jun 7 05:30:37 2016
From: ckdwibedy at gmail.com (Chinmaya Dwibedy)
Date: Tue, 7 Jun 2016 11:00:37 +0530
Subject: [rdo-list] Unable to log in to the VM instance's console using
	openstack-mitaka release
In-Reply-To:
References:
Message-ID:

Thank you Boris for your valuable suggestions. It worked.
On Fri, May 27, 2016 at 5:58 PM, Boris Derzhavets wrote:

> ------------------------------
> *From:* Chinmaya Dwibedy
> *Sent:* Friday, May 27, 2016 7:31 AM
> *To:* Boris Derzhavets
> *Cc:* rdo-list at redhat.com
> *Subject:* Re: [rdo-list] Unable to log in to the VM instance's console
> using openstack-mitaka release
>
> Hi Boris,
>
> Thank you for your prompt response.
>
> As a matter of clarification, I did not manage the key pairs in the web
> interface or through the command line; it launches the instance without
> any key pair. Also, I am not trying to log in to the VM's floating-ip
> via ssh. I am trying to access the instance console using the Dashboard.
>
> Option 1.
> [BD] Then you may start the VM with "--user-data" :-
>
> [root at dfw02 ~(keystone_admin)]$ nova boot --flavor 2
> --user-data=./myfile.txt --image 03c9ad20-b0a3-4b71-aa08-2728ecb66210
> VF20Devs
>
> where
>
> [root at dfw02 ~(keystone_admin)]$ cat ./myfile.txt
> #cloud-config
> password: mysecret
> chpasswd: { expire: False }
> ssh_pwauth: True
>
> This will allow you to log in via ssh and the dashboard with password
> "mysecret". No ssh keypairs are supposed to be created.
>
> It shows me the login prompt, but I am not able to log in to the
> instance's console (Dashboard) using username (root) and password
> (root). It says "Log in incorrect".
>
> Option 2.
> [BD] Please, read and follow ( I am sending this to you a second time ) :-
>
> 1) create a key-pair via nova CLI or dashboard
> 2) The Launch instance dialog provides an entry line "Key pair". Place
> oskey01 there ( click on "+" in Mitaka ). Thus it would write the RSA
> public key to ~fedora/.ssh/authorized_keys on the VM's file system
> ( when you boot the VM the first time )
> 3) oskey01.pem would be located in the folder where you ran
> `nova keypair-add oskey01 > oskey01.pem`. The content of the pem file
> is the RSA private key from the keypair generated by nova CLI
> 4) Login to the VM via SSH :-
> ssh -i oskey01.pem fedora at VM's floating-ip ( no password needed )
> Then inside the VM :-
> $ sudo su -
> No password is required; fedora is a special user set up via cloud-init.
> As root, assign a password to fedora and root ( for instance ).
> Switch to the dashboard and log into the VNC console to the VM. The
> created passwords are persistent for the VM.
>
> Regards,
> Chinmaya
>
> On Fri, May 27, 2016 at 2:50 PM, Boris Derzhavets wrote:
>
>> Then as the fedora user inside the VM :-
>>
>> $ sudo su -
>> # passwd fedora
>>
>> You will get a login prompt for fedora via the dashboard console
>> in the same session ( or for root, it doesn't matter )
>>
>> ------------------------------
>> *From:* rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
>> *Sent:* Friday, May 27, 2016 2:50 AM
>> *To:* Chinmaya Dwibedy; rdo-list at redhat.com
>> *Subject:* Re: [rdo-list] Unable to log in to the VM instance's console
>> using openstack-mitaka release
>>
>> When you run :-
>>
>> # source keystonerc_demo
>> # nova keypair-add oskey01 > oskey01.pem
>> # chmod 600 *.pem
>>
>> the SSH RSA public key gets uploaded to Nova and may be used when you
>> launch the VM.
>> It would be written by default to ~fedora/.ssh/authorized_keys ( as far
>> as I remember ) on your VM when it comes to ACTIVE state.
>>
>> # nova keypair-list
>>
>> shows this public rsa key as generated by the nova command.
>>
>> The SSH RSA private key gets written to oskey01.pem.
>> No hackery is needed to connect to the VM via its FIP:
>>
>> $ ssh -i oskey01.pem fedora at VM's floating-ip
>>
>> Boris.
>> ------------------------------
>> *From:* rdo-list-bounces at redhat.com on behalf of Chinmaya Dwibedy
>> *Sent:* Friday, May 27, 2016 2:24 AM
>> *To:* rdo-list at redhat.com
>> *Subject:* [rdo-list] Unable to log in to the VM instance's console
>> using openstack-mitaka release
>>
>> Hi All,
>>
>> I have installed OpenStack (i.e., the openstack-mitaka release) on
>> CentOS 7.2 and used a Fedora 20 qcow2 cloud image to create a VM using
>> the Dashboard.
>>
>> 1) Installed "libguestfs" on the Nova compute node.
>>
>> 2) Updated these lines in /etc/nova/nova.conf:
>>
>> inject_password=true
>> inject_key=true
>> inject_partition=-1
>>
>> 3) Restarted nova-compute: # service openstack-nova-compute restart
>>
>> 4) Enabled setting the root password in
>> /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py:
>>
>> OPENSTACK_HYPERVISOR_FEATURES = {
>>     ...
>>     'can_set_password': True,
>> }
>>
>> 5) Placed the code below in the "Customization Script" section of the
>> Launch Instance dialog box in OpenStack:
>>
>> #cloud-config
>> ssh_pwauth: True
>> chpasswd:
>>   list: |
>>     root: root
>>   expire: False
>> runcmd:
>>   - [ sh, -c, echo "=========hello world'=========" ]
>>
>> It appears that, when the instance was launched, cloud-init did not
>> change the password for the root user, and I was not able to log in to
>> the instance's console (Dashboard) using username (root) and password
>> (root). It says "Log in incorrect".
>>
>> Upon checking the boot log I found that cloud-init has executed
>> /var/lib/cloud/instance/scripts/runcmd and printed hello world. Can
>> anyone please let me know where I am wrong? Thanks in advance for your
>> support and time.
>>
>> Regards,
>> Chinmaya

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From me at gbraad.nl Tue Jun 7 05:48:33 2016
From: me at gbraad.nl (Gerard Braad)
Date: Tue, 7 Jun 2016 13:48:33 +0800
Subject: [rdo-list] Documenting RDO Stable and workarounds
In-Reply-To: <1465247338.16318.11.camel@ocf.co.uk>
References: <1465247338.16318.11.camel@ocf.co.uk>
Message-ID:

On Tue, Jun 7, 2016 at 5:08 AM, Christopher Brown wrote:
> However I need to clarify where TripleO QuickStart fits into the
> picture.

I have the exact same question, especially how this relates to the
current documentation as performed with instack, and the future of that
method. It does feel possible to perform bare-metal deployments from
quickstart, but what is the expectation for this? At the moment there
is the quickstart documentation and the general TripleO documentation.

--
Gerard Braad | http://gbraad.nl
[ Doing Open Source Matters ]

From whayutin at redhat.com Tue Jun 7 11:38:23 2016
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 7 Jun 2016 07:38:23 -0400
Subject: [rdo-list] Documenting RDO Stable and workarounds
In-Reply-To:
References: <1465247338.16318.11.camel@ocf.co.uk>
Message-ID:

On Tue, Jun 7, 2016 at 1:48 AM, Gerard Braad wrote:

> On Tue, Jun 7, 2016 at 5:08 AM, Christopher Brown wrote:
> > However I need to clarify where TripleO QuickStart fits into the
> > picture.
>
> I have the exact same question, especially how this relates to the
> current documentation as performed with instack, and the future of that
> method. It does feel possible to perform bare-metal deployments from
> quickstart, but what is the expectation for this? At the moment there
> is the quickstart documentation and the general TripleO documentation.

Couple of notes..

We are closing out some work at the moment that will enable
tripleo-quickstart to deploy on bare metal. There are two methods in
play here:

1. Use a virtualized undercloud node on the bare metal undercloud
physical box.
2. Install the undercloud directly on the bare metal undercloud node. I'll let Ronelle and Rasca update the status on that. Secondly, Tripleo-Quickstart should be able to produce up-to-date TripleO documentation from its CI logs. We'll be exploring this concept in the coming weeks and would welcome any input and help. Thanks > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Tue Jun 7 12:12:26 2016 From: dms at redhat.com (David Moreau Simard) Date: Tue, 7 Jun 2016 08:12:26 -0400 Subject: [rdo-list] Read the docs for DLRN shows an old version In-Reply-To: References: Message-ID: Hey, Just letting you know I haven't forgotten about this, still on my to-do list. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] On May 30, 2016 8:59 PM, "Gerard Braad" wrote: Hi All, The Read the Docs site for DLRN is showing an older version of the documentation. Likely a push does not trigger a rebuild of the docs automatically; I have experienced the same with one of my projects. I created an issue on the project's GitHub for this [1]. Hope this can be resolved. regards, Gerard [1] https://github.com/openstack-packages/DLRN/issues/17 -- Gerard Braad F/OSS & IT Consultant _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From abeekhof at redhat.com Wed Jun 8 01:35:22 2016 From: abeekhof at redhat.com (Andrew Beekhof) Date: Wed, 8 Jun 2016 11:35:22 +1000 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: References: Message-ID: In case you didn't resolve this in the meantime... On Mon, May 30, 2016 at 6:56 PM, Boris Derzhavets wrote: > > # Define a single controller node and a single compute node. > overcloud_nodes: > - name: control_0 > flavor: control > > - name: compute_0 > flavor: compute > > - name: compute_1 > flavor: compute > > # Tell tripleo how we want things done. > extra_args: >- > --neutron-network-type vxlan > --neutron-tunnel-types vxlan > --ntp-server pool.ntp.org > > network_isolation: true > > > Picks up new memory setting but doesn't create second Compute Node. > > Every time just Controller && (1)* Compute. You need to add --control-scale 2 and --compute-scale 2 to 'extra_args' From bderzhavets at hotmail.com Wed Jun 8 08:03:12 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 8 Jun 2016 08:03:12 +0000 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: References: , Message-ID: It's done http://dbaxps.blogspot.com/2016/06/attempt-of-rdo-triple0-quickstart-ha.html Thank you. Boris. ________________________________ From: Andrew Beekhof Sent: Tuesday, June 7, 2016 9:35 PM To: Boris Derzhavets Cc: John Trowbridge; Lars Kellogg-Stedman; rdo-list Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In case you didn't resolve this in the meantime... On Mon, May 30, 2016 at 6:56 PM, Boris Derzhavets wrote: > > # Define a single controller node and a single compute node. > overcloud_nodes: > - name: control_0 > flavor: control > > - name: compute_0 > flavor: compute > > - name: compute_1 > flavor: compute > > # Tell tripleo how we want things done. 
> extra_args: >- > --neutron-network-type vxlan > --neutron-tunnel-types vxlan > --ntp-server pool.ntp.org > > network_isolation: true > > > Picks up new memory setting but doesn't create second Compute Node. > > Every time just Controller && (1)* Compute. You need to add --control-scale 2 and --compute-scale 2 to 'extra_args' -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Jun 8 08:10:58 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 8 Jun 2016 08:10:58 +0000 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: References: , , Message-ID: Link doesn't work :- http://dbaxps.blogspot.ru/2016/06/attempt-of-rdo-triple0-quickstart-ha.html ________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Wednesday, June 8, 2016 4:03 AM To: Andrew Beekhof Cc: John Trowbridge; rdo-list Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash It's done http://dbaxps.blogspot.com/2016/06/attempt-of-rdo-triple0-quickstart-ha.html Thank you. Boris. ________________________________ From: Andrew Beekhof Sent: Tuesday, June 7, 2016 9:35 PM To: Boris Derzhavets Cc: John Trowbridge; Lars Kellogg-Stedman; rdo-list Subject: Re: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick at laimbock.com Wed Jun 8 11:20:02 2016 From: patrick at laimbock.com (Patrick Laimbock) Date: Wed, 8 Jun 2016 13:20:02 +0200 Subject: [rdo-list] Tripleo QuickStart HA deployment attempts constantly crash In-Reply-To: References: Message-ID: <2894381f-0062-53dc-f090-b4bb5cc88e6e@laimbock.com> On 08-06-16 10:10, Boris Derzhavets wrote: > Link doesn't work :- > > http://dbaxps.blogspot.ru/2016/06/attempt-of-rdo-triple0-quickstart-ha.html > Try: http://dbaxps.blogspot.nl/2016/06/attempt-of-rdo-triple0-quickstart-ha.html HTH, Patrick From ckdwibedy at gmail.com Wed Jun 8 12:08:05 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Wed, 8 Jun 2016 17:38:05 +0530 Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release In-Reply-To: References: Message-ID: Hi, I am getting a ConnectFailure error message upon triggering the "nova image-list" command. The nova-api process should be listening on 8774, but it doesn't look like it is running. Also I do not find any error logs in nova-api.log, nova-compute.log, and nova-conductor.log. I am using the openstack-mitaka release on the host (CentOS 7.2). How can I debug and know what prevents it from running?
Please suggest. Note: This was working a while back and I got this issue all of a sudden. Here are some logs. [root at localhost ~(keystone_admin)]# nova image-list ERROR (ConnectFailure): Unable to establish connection to http://172.18.121.48:8774/v2/4bc608763cee41d9a8df26d3ef919825 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# service openstack-nova-api restart Redirecting to /bin/systemctl restart openstack-nova-api.service Job for openstack-nova-api.service failed because the control process exited with error code. See "systemctl status openstack-nova-api.service" and "journalctl -xe" for details. [root at localhost ~(keystone_admin)]# systemctl status openstack-nova-api.service ● openstack-nova-api.service - OpenStack Nova API Server Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled) Active: activating (start) since Wed 2016-06-08 07:59:20 EDT; 2s ago Main PID: 179955 (nova-api) CGroup: /system.slice/openstack-nova-api.service └─179955 /usr/bin/python2 /usr/bin/nova-api Jun 08 07:59:20 localhost systemd[1]: Starting OpenStack Nova API Server... Jun 08 07:59:22 localhost python2[179955]: detected unhandled Python exception in '/usr/bin/nova-api' [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# keystone endpoint-list /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: DeprecationWarning: Using the 'tenant_name' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_name' argument instead super(Client, self).__init__(**kwargs) /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: DeprecationWarning: Using the 'tenant_id' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' argument instead return f(*args, **kwargs) /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: DeprecationWarning: Constructing an HTTPClient instance without using a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: DeprecationWarning: keystoneclient.session.Session is deprecated as of the 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed in future releases. DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: DeprecationWarning: keystoneclient auth plugins are deprecated as of the 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in future releases. 
'in future releases.', DeprecationWarning) +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ | id | region | publicurl | internalurl | adminurl | service_id | +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ | 02fcec9a7b834128b3e30403c4ed0de7 | RegionOne | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | 5533324a63d8402888040832640a19d0 | | 295802909413422cb7c22dc1e268bce9 | RegionOne | http://172.18.121.48:8774/v2/%(tenant_id)s | http://172.18.121.48:8774/v2/%(tenant_id)s | http://172.18.121.48:8774/v2/%(tenant_id)s | f7fe68bf4cec47a4a3c942f3916dc377 | | 2a125f10b0d04f8a9306dede85b65514 | RegionOne | http://172.18.121.48:9696 | http://172.18.121.48:9696 | http://172.18.121.48:9696 | b2a60cdc144e40a49757f13c2264f030 | | 2d1a91d39f3d421cb1b2fe73fba5fd3a | RegionOne | http://172.18.121.48:8777 | http://172.18.121.48:8777 | http://172.18.121.48:8777 | e6d750ac5ef3433799d4fe39518a3fe6 | | 47b634f3e18e4caf914521a1a4157008 | RegionOne | http://172.18.121.48:8042 | http://172.18.121.48:8042 | http://172.18.121.48:8042 | 07cd8adf66254b4ab9b07be03a24084b | | 595913f7227b44dc8753db3b0cf6acdc | RegionOne | http://172.18.121.48:8041 | http://172.18.121.48:8041 | http://172.18.121.48:8041 | f43240abe5f3476ea64a8bd381fe4da7 | | 64381b509bc84639b6a4710e6d99a23b | RegionOne | http://172.18.121.48:8776/v1/%(tenant_id)s | http://172.18.121.48:8776/v1/%(tenant_id)s | http://172.18.121.48:8776/v1/%(tenant_id)s | 7edc7bedf93d4f388185699b9793ec7f | | 727d25775be54c9f8453f697ae5cb625 | RegionOne | 
http://172.18.121.48:5000/v2.0 | http://172.18.121.48:5000/v2.0 | http://172.18.121.48:35357/v2.0 | 25e99a2a98f244d9a73bf965acdd39da | | 9049338c57574b2d8ff8308b1a4265a5 | RegionOne | http://172.18.121.48:8776/v2/%(tenant_id)s | http://172.18.121.48:8776/v2/%(tenant_id)s | http://172.18.121.48:8776/v2/%(tenant_id)s | 6e070f0629094b72b66025250fdbda64 | | c051c0f9649143f6b29eaf0895940abe | RegionOne | http://172.18.121.48:9292 | http://172.18.121.48:9292 | http://172.18.121.48:9292 | 40874e10139a47eb88dfec2114047a34 | | ee4a00c1e8334cb8921fa3f2a7c82f1b | RegionOne | http://172.18.121.48:8774/v3 | http://172.18.121.48:8774/v3 | http://172.18.121.48:8774/v3 | 24c92ce4cd354e3db6c5ad59b8beeae8 | | fa60b5ba0ab7436ab1ffebb2982d3ccc | RegionOne | http://127.0.0.1:8776/v3/%(tenant_id)s | http://127.0.0.1:8776/v3/%(tenant_id)s | http://127.0.0.1:8776/v3/%(tenant_id)s | b0b6b97cf9d649c9800300cc64b0e866 | +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# netstat -ntlp | grep 8774 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# ps -ef | grep nova-api nova 156427 1 86 07:51 ? 00:00:01 /usr/bin/python2 /usr/bin/nova-api [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# lsof -i :8774 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# keystone user-list /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 
'python-keystoneclient.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: DeprecationWarning: Constructing an instance of the keystoneclient.v2_0.client.Client class without a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: DeprecationWarning: Using the 'tenant_name' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_name' argument instead super(Client, self).__init__(**kwargs) /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: DeprecationWarning: Using the 'tenant_id' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' argument instead return f(*args, **kwargs) /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: DeprecationWarning: Constructing an HTTPClient instance without using a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: DeprecationWarning: keystoneclient.session.Session is deprecated as of the 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed in future releases. DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: DeprecationWarning: keystoneclient auth plugins are deprecated as of the 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in future releases. 
'in future releases.', DeprecationWarning) +----------------------------------+------------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+----------------------+ | 266f5859848e4f39b9725203dda5c3f2 | admin | True | root at localhost | | 79a6ff3cc7cc4d018247c750adbc18e7 | aodh | True | aodh at localhost | | 90f28a2a80054132a901d39da307213f | ceilometer | True | ceilometer at localhost | | 16fa5ffa60e147d89ad84646b6519278 | cinder | True | cinder at localhost | | c6312ec6c2c444288a412f32173fcd99 | glance | True | glance at localhost | | ac8fb9c33d404a1697d576d428db90b3 | gnocchi | True | gnocchi at localhost | | 1a5b4da4ed974ac8a6c78b752ac8fab6 | neutron | True | neutron at localhost | | f21e8a15da5c40b7957416de4fa91b62 | nova | True | nova at localhost | | b843358d7ae44944b11af38ce4b61f4d | swift | True | swift at localhost | +----------------------------------+------------+---------+----------------------+ [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# nova-manage service list Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future. Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications". Option "notification_topics" from group "DEFAULT" is deprecated. Use option "topics" from group "oslo_messaging_notifications". DEPRECATED: Use the nova service-* commands from python-novaclient instead or the os-services REST resource. The service subcommand will be removed in the 14.0 release. 
Binary Host Zone Status State Updated_At nova-osapi_compute 0.0.0.0 internal enabled XXX None nova-metadata 0.0.0.0 internal enabled XXX None nova-cert localhost internal enabled XXX 2016-06-08 06:12:38 nova-consoleauth localhost internal enabled XXX 2016-06-08 06:12:37 nova-scheduler localhost internal enabled XXX 2016-06-08 06:12:38 nova-conductor localhost internal enabled XXX 2016-06-08 06:12:37 nova-compute localhost nova enabled XXX 2016-06-08 06:12:43 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# ls -l /var/log/nova/ total 4 -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-api.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-cert.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-compute.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-conductor.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-consoleauth.log -rw-r--r--. 1 root root 0 Jun 8 05:22 nova-manage.log -rw-r--r--. 1 nova nova 995 Jun 8 05:32 nova-novncproxy.log -rw-r--r--. 1 nova nova 0 Jun 8 05:23 nova-scheduler.log [root at localhost ~(keystone_admin)]# Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Jun 8 12:19:17 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 8 Jun 2016 12:19:17 +0000 Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release In-Reply-To: References: , Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Chinmaya Dwibedy Sent: Wednesday, June 8, 2016 8:08 AM To: rdo-list at redhat.com Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release Hi, I am getting a ConnectFailure error message upon triggering the "nova image-list" command. The nova-api process should be listening on 8774, but it doesn't look like it is running.
Also I do not find any error logs in nova-api.log [BD] # netstat -antp | grep 8774 # iptables-save | grep 8774 If the first command returns xxxxxx/python then `ps -ef | grep xxxxxx` nova-compute.log and nova-conductor.log. I am using the openstack-mitaka release on the host (CentOS 7.2). How can I debug and know what prevents it from running? Please suggest. Note: This was working a while back and I got this issue all of a sudden. Here are some logs. [...] Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckdwibedy at gmail.com Wed Jun 8 12:39:58 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Wed, 8 Jun 2016 18:09:58 +0530 Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release In-Reply-To: References: Message-ID: Hi Boris, It appears that the nova-api process is not running.
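The "activating (start)" status and the "detected unhandled Python exception" line earlier in the thread mean systemd is restarting nova-api in a loop while truncating the traceback. A minimal sketch of the check discussed here (the port-check helper is plain bash, not OpenStack tooling; the commented follow-up commands assume the service name used in this thread):

```shell
#!/usr/bin/env bash
# Sketch: verify whether anything accepts TCP connections on the nova-api
# port before digging into the service itself. Requires bash: /dev/tcp is
# a bash pseudo-device, not a real file.

port_open() {
    # Succeeds (exit 0) if a TCP connect to $1:$2 works, fails otherwise.
    # The subshell closes fd 3 automatically on exit.
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 8774; then
    echo "something is listening on 8774"
else
    echo "nothing on 8774 -- get the full traceback that systemd truncates:"
    # journalctl -u openstack-nova-api.service --no-pager | tail -n 50
    # or run the daemon in the foreground to see the exception directly:
    # sudo -u nova /usr/bin/python2 /usr/bin/nova-api
fi
```

On the host in this thread the else branch would fire; the journal (rather than `systemctl status`) is usually where the full Python traceback behind "detected unhandled Python exception" ends up.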
[root at localhost ~(keystone_admin)]# netstat -antp | grep 8774 [root at localhost ~(keystone_admin)]# iptables-save | grep 8774 -A INPUT -p tcp -m multiport --dports 8773,8774,8775 -m comment --comment "001 nova api incoming nova_api" -j ACCEPT [root at localhost ~(keystone_admin)]# Regards, Chinmaya On Wed, Jun 8, 2016 at 5:49 PM, Boris Derzhavets wrote: > > > > ------------------------------ > *From:* rdo-list-bounces at redhat.com on > behalf of Chinmaya Dwibedy > *Sent:* Wednesday, June 8, 2016 8:08 AM > *To:* rdo-list at redhat.com > *Subject:* [rdo-list] Fwd: ConnectFailure error upon triggering "nova > image-list" command using openstack-mitaka release > > > Hi, > > > I am getting a ConnectFailure error message upon triggering the "nova > image-list" command. The nova-api process should be listening on 8774, but > it doesn't look like it is running. Also I do not find any error logs in > nova-api.log > > > [BD] > > > # netstat -antp | grep 8774 > > # iptables-save | grep 8774 > > If the first command returns xxxxxx/python > > then `ps -ef | grep xxxxxx` > > > nova-compute.log and nova-conductor.log. I am using the openstack-mitaka > release on the host (CentOS 7.2). How can I debug and know what prevents it > from running? Please suggest. > > > Note: This was working a while back and I got this issue all of a sudden. > > > Here are some logs. > > > [root at localhost ~(keystone_admin)]# nova image-list > > ERROR (ConnectFailure): Unable to establish connection to > http://172.18.121.48:8774/v2/4bc608763cee41d9a8df26d3ef919825 > > [root at localhost ~(keystone_admin)]# > > > > [root at localhost ~(keystone_admin)]# service openstack-nova-api restart > > Redirecting to /bin/systemctl restart openstack-nova-api.service > > Job for openstack-nova-api.service failed because the control process > exited with error code. See "systemctl status openstack-nova-api.service" > and "journalctl -xe" for details.
> > [root at localhost ~(keystone_admin)]# systemctl status > openstack-nova-api.service > > ? openstack-nova-api.service - OpenStack Nova API Server > > Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; > enabled; vendor preset: disabled) > > Active: activating (start) since Wed 2016-06-08 07:59:20 EDT; 2s ago > > Main PID: 179955 (nova-api) > > CGroup: /system.slice/openstack-nova-api.service > > ??179955 /usr/bin/python2 /usr/bin/nova-api > > > > Jun 08 07:59:20 localhost systemd[1]: Starting OpenStack Nova API Server... > > Jun 08 07:59:22 localhost python2[179955]: detected unhandled Python > exception in '/usr/bin/nova-api' > > [root at localhost ~(keystone_admin)]# > > > > [root at localhost ~(keystone_admin)]# keystone endpoint-list > > /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: > DeprecationWarning: The keystone CLI is deprecated in favor of > python-openstackclient. For a Python library, continue using > python-keystoneclient. > > 'python-keystoneclient.', DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: > DeprecationWarning: Constructing an instance of the > keystoneclient.v2_0.client.Client class without a session is deprecated as > of the 1.7.0 release and may be removed in the 2.0.0 release. 
> > 'the 2.0.0 release.', DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: > DeprecationWarning: Using the 'tenant_name' argument is deprecated in > version '1.7.0' and will be removed in version '2.0.0', please use the > 'project_name' argument instead > > super(Client, self).__init__(**kwargs) > > /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: > DeprecationWarning: Using the 'tenant_id' argument is deprecated in version > '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' > argument instead > > return f(*args, **kwargs) > > /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: > DeprecationWarning: Constructing an HTTPClient instance without using a > session is deprecated as of the 1.7.0 release and may be removed in the > 2.0.0 release. > > 'the 2.0.0 release.', DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: > DeprecationWarning: keystoneclient.session.Session is deprecated as of the > 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed > in future releases. > > DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: > DeprecationWarning: keystoneclient auth plugins are deprecated as of the > 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in > future releases. 
> > 'in future releases.', DeprecationWarning) > > > +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ > > | id | region | > publicurl | > internalurl | > adminurl | service_id | > > > +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ > > | 02fcec9a7b834128b3e30403c4ed0de7 | RegionOne | > http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | > http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | > http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | > 5533324a63d8402888040832640a19d0 | > > | 295802909413422cb7c22dc1e268bce9 | RegionOne | > http://172.18.121.48:8774/v2/%(tenant_id)s | > http://172.18.121.48:8774/v2/%(tenant_id)s | > http://172.18.121.48:8774/v2/%(tenant_id)s | > f7fe68bf4cec47a4a3c942f3916dc377 | > > | 2a125f10b0d04f8a9306dede85b65514 | RegionOne | > http://172.18.121.48:9696 | > http://172.18.121.48:9696 | > http://172.18.121.48:9696 | b2a60cdc144e40a49757f13c2264f030 | > > | 2d1a91d39f3d421cb1b2fe73fba5fd3a | RegionOne | > http://172.18.121.48:8777 | > http://172.18.121.48:8777 | > http://172.18.121.48:8777 | e6d750ac5ef3433799d4fe39518a3fe6 | > > | 47b634f3e18e4caf914521a1a4157008 | RegionOne | > http://172.18.121.48:8042 | > http://172.18.121.48:8042 | > http://172.18.121.48:8042 | 07cd8adf66254b4ab9b07be03a24084b | > > | 595913f7227b44dc8753db3b0cf6acdc | RegionOne | > http://172.18.121.48:8041 | > http://172.18.121.48:8041 | > http://172.18.121.48:8041 | f43240abe5f3476ea64a8bd381fe4da7 | > > | 64381b509bc84639b6a4710e6d99a23b | RegionOne | > http://172.18.121.48:8776/v1/%(tenant_id)s | > http://172.18.121.48:8776/v1/%(tenant_id)s | > http://172.18.121.48:8776/v1/%(tenant_id)s | > 
7edc7bedf93d4f388185699b9793ec7f | > > | 727d25775be54c9f8453f697ae5cb625 | RegionOne | > http://172.18.121.48:5000/v2.0 | > http://172.18.121.48:5000/v2.0 | > http://172.18.121.48:35357/v2.0 | > 25e99a2a98f244d9a73bf965acdd39da | > > | 9049338c57574b2d8ff8308b1a4265a5 | RegionOne | > http://172.18.121.48:8776/v2/%(tenant_id)s | > http://172.18.121.48:8776/v2/%(tenant_id)s | > http://172.18.121.48:8776/v2/%(tenant_id)s | > 6e070f0629094b72b66025250fdbda64 | > > | c051c0f9649143f6b29eaf0895940abe | RegionOne | > http://172.18.121.48:9292 | > http://172.18.121.48:9292 | > http://172.18.121.48:9292 | 40874e10139a47eb88dfec2114047a34 | > > | ee4a00c1e8334cb8921fa3f2a7c82f1b | RegionOne | > http://172.18.121.48:8774/v3 | > http://172.18.121.48:8774/v3 | > http://172.18.121.48:8774/v3 | 24c92ce4cd354e3db6c5ad59b8beeae8 | > > | fa60b5ba0ab7436ab1ffebb2982d3ccc | RegionOne | > http://127.0.0.1:8776/v3/%(tenant_id)s | > http://127.0.0.1:8776/v3/%(tenant_id)s | > http://127.0.0.1:8776/v3/%(tenant_id)s | > b0b6b97cf9d649c9800300cc64b0e866 | > > > +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ > > [root at localhost ~(keystone_admin)]# > > > > > > [root at localhost ~(keystone_admin)]# netstat -ntlp | grep 8774 > > [root at localhost ~(keystone_admin)]# > > > > [root at localhost ~(keystone_admin)]# ps -ef | grep nova-api > > nova 156427 1 86 07:51 ? 00:00:01 /usr/bin/python2 > /usr/bin/nova-api > > [root at localhost ~(keystone_admin)]# > > > > [root at localhost ~(keystone_admin)]# lsof -i :8774 > > [root at localhost ~(keystone_admin)]# > > > > > > [root at localhost ~(keystone_admin)]# keystone user-list > > /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: > DeprecationWarning: The keystone CLI is deprecated in favor of > python-openstackclient. 
For a Python library, continue using > python-keystoneclient. > > 'python-keystoneclient.', DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: > DeprecationWarning: Constructing an instance of the > keystoneclient.v2_0.client.Client class without a session is deprecated as > of the 1.7.0 release and may be removed in the 2.0.0 release. > > 'the 2.0.0 release.', DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: > DeprecationWarning: Using the 'tenant_name' argument is deprecated in > version '1.7.0' and will be removed in version '2.0.0', please use the > 'project_name' argument instead > > super(Client, self).__init__(**kwargs) > > /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: > DeprecationWarning: Using the 'tenant_id' argument is deprecated in version > '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' > argument instead > > return f(*args, **kwargs) > > /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: > DeprecationWarning: Constructing an HTTPClient instance without using a > session is deprecated as of the 1.7.0 release and may be removed in the > 2.0.0 release. > > 'the 2.0.0 release.', DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: > DeprecationWarning: keystoneclient.session.Session is deprecated as of the > 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed > in future releases. > > DeprecationWarning) > > /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: > DeprecationWarning: keystoneclient auth plugins are deprecated as of the > 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in > future releases. 
> > 'in future releases.', DeprecationWarning) > > > +----------------------------------+------------+---------+----------------------+ > > | id | name | enabled | > email | > > > +----------------------------------+------------+---------+----------------------+ > > | 266f5859848e4f39b9725203dda5c3f2 | admin | True | > root at localhost | > > | 79a6ff3cc7cc4d018247c750adbc18e7 | aodh | True | > aodh at localhost | > > | 90f28a2a80054132a901d39da307213f | ceilometer | True | > ceilometer at localhost | > > | 16fa5ffa60e147d89ad84646b6519278 | cinder | True | > cinder at localhost | > > | c6312ec6c2c444288a412f32173fcd99 | glance | True | > glance at localhost | > > | ac8fb9c33d404a1697d576d428db90b3 | gnocchi | True | > gnocchi at localhost | > > | 1a5b4da4ed974ac8a6c78b752ac8fab6 | neutron | True | > neutron at localhost | > > | f21e8a15da5c40b7957416de4fa91b62 | nova | True | > nova at localhost | > > | b843358d7ae44944b11af38ce4b61f4d | swift | True | > swift at localhost | > > > +----------------------------------+------------+---------+----------------------+ > > [root at localhost ~(keystone_admin)]# > > > > [root at localhost ~(keystone_admin)]# nova-manage service list > > Option "verbose" from group "DEFAULT" is deprecated for removal. Its > value may be silently ignored in the future. > > Option "notification_driver" from group "DEFAULT" is deprecated. Use > option "driver" from group "oslo_messaging_notifications". > > Option "notification_topics" from group "DEFAULT" is deprecated. Use > option "topics" from group "oslo_messaging_notifications". > > DEPRECATED: Use the nova service-* commands from python-novaclient instead > or the os-services REST resource. The service subcommand will be removed in > the 14.0 release. 
> > Binary Host Zone > Status State Updated_At > > nova-osapi_compute 0.0.0.0 internal > enabled XXX None > > nova-metadata 0.0.0.0 internal > enabled XXX None > > nova-cert localhost internal > enabled XXX 2016-06-08 06:12:38 > > nova-consoleauth localhost internal > enabled XXX 2016-06-08 06:12:37 > > nova-scheduler localhost internal > enabled XXX 2016-06-08 06:12:38 > > nova-conductor localhost internal > enabled XXX 2016-06-08 06:12:37 > > nova-compute localhost nova > enabled XXX 2016-06-08 06:12:43 > > [root at localhost ~(keystone_admin)]# > > > > [root at localhost ~(keystone_admin)]# ls -l /var/log/nova/ > > total 4 > > -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-api.log > > -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-cert.log > > -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-compute.log > > -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-conductor.log > > -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-consoleauth.log > > -rw-r--r--. 1 root root 0 Jun 8 05:22 nova-manage.log > > -rw-r--r--. 1 nova nova 995 Jun 8 05:32 nova-novncproxy.log > > -rw-r--r--. 1 nova nova 0 Jun 8 05:23 nova-scheduler.log > > [root at localhost ~(keystone_admin)]# > > > > > > Regards, > > Chinmaya > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bderzhavets at hotmail.com Wed Jun 8 12:58:38 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 8 Jun 2016 12:58:38 +0000 Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release In-Reply-To: References: , Message-ID: ________________________________ From: Chinmaya Dwibedy Sent: Wednesday, June 8, 2016 8:39 AM To: Boris Derzhavets Cc: rdo-list at redhat.com Subject: Re: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release Hi Boris, It appears that the nova-api process is not running.
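The ls -l listing above shows nova-api.log at 0 bytes, so the traceback is unlikely to be in the file at all; on systemd hosts it usually lands in the journal instead. A small sketch of that triage logic (assuming bash; the unit name matches the systemctl output in the thread, everything else is illustrative):

```shell
# If the service log file is empty, fall back to the systemd journal, which is
# where messages like "detected unhandled Python exception" end up.
check_log() {
    logfile="$1"
    if [ -s "$logfile" ]; then          # -s: file exists and is non-empty
        echo "inspect $logfile"
    else
        echo "empty log: try 'journalctl -u openstack-nova-api.service -n 100'"
    fi
}
check_log /var/log/nova/nova-api.log
```

`journalctl -xe`, which the failed restart already pointed at, shows the same records with extra context.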
Check /var/log/nova/api.log (or nova-api.log, I forget the exact name of the log file). [root at localhost ~(keystone_admin)]# netstat -antp | grep 8774 [root at localhost ~(keystone_admin)]# iptables-save | grep 8774 -A INPUT -p tcp -m multiport --dports 8773,8774,8775 -m comment --comment "001 nova api incoming nova_api" -j ACCEPT [root at localhost ~(keystone_admin)]# Regards, Chinmaya On Wed, Jun 8, 2016 at 5:49 PM, Boris Derzhavets > wrote: ________________________________ From: rdo-list-bounces at redhat.com > on behalf of Chinmaya Dwibedy > Sent: Wednesday, June 8, 2016 8:08 AM To: rdo-list at redhat.com Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release Hi, I am getting the ConnectFailure error message upon triggering the "nova image-list" command. The nova-api process should be listening on 8774. It doesn't look like it is running. Also I do not find any error logs in nova-api.log [BD] # netstat -antp | grep 8774 # iptables-save | grep 8774 If the first command returns xxxxxx/python then `ps -ef | grep xxxxxx` nova-compute.log and nova-conductor.log. I am using the openstack-mitaka release on the host (CentOS 7.2). How can I debug and know what prevents it from running? Please suggest. Note: This was working a while back and I got this issue all of a sudden. Here are some logs. [root at localhost ~(keystone_admin)]# nova image-list ERROR (ConnectFailure): Unable to establish connection to http://172.18.121.48:8774/v2/4bc608763cee41d9a8df26d3ef919825 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# service openstack-nova-api restart Redirecting to /bin/systemctl restart openstack-nova-api.service Job for openstack-nova-api.service failed because the control process exited with error code. See "systemctl status openstack-nova-api.service" and "journalctl -xe" for details. [root at localhost ~(keystone_admin)]# systemctl status openstack-nova-api.service ●
openstack-nova-api.service - OpenStack Nova API Server Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled) Active: activating (start) since Wed 2016-06-08 07:59:20 EDT; 2s ago Main PID: 179955 (nova-api) CGroup: /system.slice/openstack-nova-api.service ??179955 /usr/bin/python2 /usr/bin/nova-api Jun 08 07:59:20 localhost systemd[1]: Starting OpenStack Nova API Server... Jun 08 07:59:22 localhost python2[179955]: detected unhandled Python exception in '/usr/bin/nova-api' [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# keystone endpoint-list /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 'python-keystoneclient.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: DeprecationWarning: Constructing an instance of the keystoneclient.v2_0.client.Client class without a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: DeprecationWarning: Using the 'tenant_name' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_name' argument instead super(Client, self).__init__(**kwargs) /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: DeprecationWarning: Using the 'tenant_id' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' argument instead return f(*args, **kwargs) /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: DeprecationWarning: Constructing an HTTPClient instance without using a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 
'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: DeprecationWarning: keystoneclient.session.Session is deprecated as of the 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed in future releases. DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: DeprecationWarning: keystoneclient auth plugins are deprecated as of the 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in future releases. 'in future releases.', DeprecationWarning) +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ | id | region | publicurl | internalurl | adminurl | service_id | +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ | 02fcec9a7b834128b3e30403c4ed0de7 | RegionOne | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | 5533324a63d8402888040832640a19d0 | | 295802909413422cb7c22dc1e268bce9 | RegionOne | http://172.18.121.48:8774/v2/%(tenant_id)s | http://172.18.121.48:8774/v2/%(tenant_id)s | http://172.18.121.48:8774/v2/%(tenant_id)s | f7fe68bf4cec47a4a3c942f3916dc377 | | 2a125f10b0d04f8a9306dede85b65514 | RegionOne | http://172.18.121.48:9696 | http://172.18.121.48:9696 | http://172.18.121.48:9696 | b2a60cdc144e40a49757f13c2264f030 | | 2d1a91d39f3d421cb1b2fe73fba5fd3a | RegionOne | http://172.18.121.48:8777 | http://172.18.121.48:8777 | http://172.18.121.48:8777 | e6d750ac5ef3433799d4fe39518a3fe6 | | 47b634f3e18e4caf914521a1a4157008 | RegionOne | http://172.18.121.48:8042 | 
http://172.18.121.48:8042 | http://172.18.121.48:8042 | 07cd8adf66254b4ab9b07be03a24084b | | 595913f7227b44dc8753db3b0cf6acdc | RegionOne | http://172.18.121.48:8041 | http://172.18.121.48:8041 | http://172.18.121.48:8041 | f43240abe5f3476ea64a8bd381fe4da7 | | 64381b509bc84639b6a4710e6d99a23b | RegionOne | http://172.18.121.48:8776/v1/%(tenant_id)s | http://172.18.121.48:8776/v1/%(tenant_id)s | http://172.18.121.48:8776/v1/%(tenant_id)s | 7edc7bedf93d4f388185699b9793ec7f | | 727d25775be54c9f8453f697ae5cb625 | RegionOne | http://172.18.121.48:5000/v2.0 | http://172.18.121.48:5000/v2.0 | http://172.18.121.48:35357/v2.0 | 25e99a2a98f244d9a73bf965acdd39da | | 9049338c57574b2d8ff8308b1a4265a5 | RegionOne | http://172.18.121.48:8776/v2/%(tenant_id)s | http://172.18.121.48:8776/v2/%(tenant_id)s | http://172.18.121.48:8776/v2/%(tenant_id)s | 6e070f0629094b72b66025250fdbda64 | | c051c0f9649143f6b29eaf0895940abe | RegionOne | http://172.18.121.48:9292 | http://172.18.121.48:9292 | http://172.18.121.48:9292 | 40874e10139a47eb88dfec2114047a34 | | ee4a00c1e8334cb8921fa3f2a7c82f1b | RegionOne | http://172.18.121.48:8774/v3 | http://172.18.121.48:8774/v3 | http://172.18.121.48:8774/v3 | 24c92ce4cd354e3db6c5ad59b8beeae8 | | fa60b5ba0ab7436ab1ffebb2982d3ccc | RegionOne | http://127.0.0.1:8776/v3/%(tenant_id)s | http://127.0.0.1:8776/v3/%(tenant_id)s | http://127.0.0.1:8776/v3/%(tenant_id)s | b0b6b97cf9d649c9800300cc64b0e866 | +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# netstat -ntlp | grep 8774 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# ps -ef | grep nova-api nova 156427 1 86 07:51 ? 
00:00:01 /usr/bin/python2 /usr/bin/nova-api [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# lsof -i :8774 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# keystone user-list /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient. 'python-keystoneclient.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: DeprecationWarning: Constructing an instance of the keystoneclient.v2_0.client.Client class without a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: DeprecationWarning: Using the 'tenant_name' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_name' argument instead super(Client, self).__init__(**kwargs) /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: DeprecationWarning: Using the 'tenant_id' argument is deprecated in version '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' argument instead return f(*args, **kwargs) /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: DeprecationWarning: Constructing an HTTPClient instance without using a session is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. 'the 2.0.0 release.', DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: DeprecationWarning: keystoneclient.session.Session is deprecated as of the 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed in future releases. 
DeprecationWarning) /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: DeprecationWarning: keystoneclient auth plugins are deprecated as of the 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in future releases. 'in future releases.', DeprecationWarning) +----------------------------------+------------+---------+----------------------+ | id | name | enabled | email | +----------------------------------+------------+---------+----------------------+ | 266f5859848e4f39b9725203dda5c3f2 | admin | True | root at localhost | | 79a6ff3cc7cc4d018247c750adbc18e7 | aodh | True | aodh at localhost | | 90f28a2a80054132a901d39da307213f | ceilometer | True | ceilometer at localhost | | 16fa5ffa60e147d89ad84646b6519278 | cinder | True | cinder at localhost | | c6312ec6c2c444288a412f32173fcd99 | glance | True | glance at localhost | | ac8fb9c33d404a1697d576d428db90b3 | gnocchi | True | gnocchi at localhost | | 1a5b4da4ed974ac8a6c78b752ac8fab6 | neutron | True | neutron at localhost | | f21e8a15da5c40b7957416de4fa91b62 | nova | True | nova at localhost | | b843358d7ae44944b11af38ce4b61f4d | swift | True | swift at localhost | +----------------------------------+------------+---------+----------------------+ [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# nova-manage service list Option "verbose" from group "DEFAULT" is deprecated for removal. Its value may be silently ignored in the future. Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications". Option "notification_topics" from group "DEFAULT" is deprecated. Use option "topics" from group "oslo_messaging_notifications". DEPRECATED: Use the nova service-* commands from python-novaclient instead or the os-services REST resource. The service subcommand will be removed in the 14.0 release. 
Binary Host Zone Status State Updated_At nova-osapi_compute 0.0.0.0 internal enabled XXX None nova-metadata 0.0.0.0 internal enabled XXX None nova-cert localhost internal enabled XXX 2016-06-08 06:12:38 nova-consoleauth localhost internal enabled XXX 2016-06-08 06:12:37 nova-scheduler localhost internal enabled XXX 2016-06-08 06:12:38 nova-conductor localhost internal enabled XXX 2016-06-08 06:12:37 nova-compute localhost nova enabled XXX 2016-06-08 06:12:43 [root at localhost ~(keystone_admin)]# [root at localhost ~(keystone_admin)]# ls -l /var/log/nova/ total 4 -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-api.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-cert.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-compute.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-conductor.log -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-consoleauth.log -rw-r--r--. 1 root root 0 Jun 8 05:22 nova-manage.log -rw-r--r--. 1 nova nova 995 Jun 8 05:32 nova-novncproxy.log -rw-r--r--. 1 nova nova 0 Jun 8 05:23 nova-scheduler.log [root at localhost ~(keystone_admin)]# Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From shevel.andrey at gmail.com Wed Jun 8 13:06:47 2016 From: shevel.andrey at gmail.com (Andrey Shevel) Date: Wed, 8 Jun 2016 16:06:47 +0300 Subject: [rdo-list] Reinstallation RDO Openstack Message-ID: Hello, after update OS (Scientific Linux 7.2) I decided to reinstall the Openstack. 
I did everything from the page openstack.redhat.com. Unfortunately I got this message (I tried several times and got exactly the same answer): ========================================= Applying 212.193.96.154_keystone.pp Applying 212.193.96.154_glance.pp Applying 212.193.96.154_cinder.pp 212.193.96.154_keystone.pp: [ DONE ] 212.193.96.154_glance.pp: [ DONE ] 212.193.96.154_cinder.pp: [ DONE ] Applying 212.193.96.154_api_nova.pp 212.193.96.154_api_nova.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 212.193.96.154_api_nova.pp Error: Could not autoload puppet/provider/nova_flavor/openstack: uninitialized constant Puppet::Provider::Openstack You will find full trace in log /var/tmp/packstack/20160608-154226-EtZiWG/manifests/212.193.96.154_api_nova.pp.log Please check log file /var/tmp/packstack/20160608-154226-EtZiWG/openstack-setup.log for more information Additional information: * A new answerfile was created in: /root/packstack-answers-20160608-154226.txt * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * File /root/keystonerc_admin has been created on OpenStack client host 212.193.96.154. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://212.193.96.154/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://212.193.96.154/nagios username: nagiosadmin, password: 89651f6d0bdd4176 ++ echo +++ date ++ echo 'Stop Date & Time = ' Wed Jun 8 15:51:09 MSK 2016 Stop Date & Time = Wed Jun 8 15:51:09 MSK 2016 [root at lmsys001 ~]# uname -a Linux lmsys001.pnpi.spb.ru 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 04:13:05 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux [root at lmsys001 ~]# cat /proc/version Linux version 3.10.0-327.18.2.el7.x86_64 (mockbuild at sl7-uefisign.fnal.gov) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Thu May 12 04:13:05 CDT 2016 [root at lmsys001 ~]# puppet --version 3.6.2 =================================== Any ideas would be helpful. Thanks in advance. -- Andrey Y Shevel From javier.pena at redhat.com Wed Jun 8 16:29:54 2016 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 8 Jun 2016 12:29:54 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1652324805.13593510.1465402844203.JavaMail.zimbra@redhat.com> Message-ID: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> Hi RDO, After some discussions about the way Packstack was (mis)using Puppet, and how to improve it, I've been working on a refactor. Its current state is available at https://github.com/javierpena/packstack/tree/feature/manifest_refactor, and as soon as it is polished it will go to Gerrit. It basically tries to reduce the number of Puppet executions to one per server role (controller, network node, compute), instead of multiple individual runs. While talking about the refactor, a second discussion about a deeper change was started. I'd like to summarize the current concerns and ideas in this mail, so we can follow-up and make a decision: - Currently, the Packstack CI is only testing single-node installs. Testing multi-node installs upstream has been challenging, and multi-node may go beyond the PoC target of Packstack. 
So, one proposal is to keep the all-in-one single-node install only, and add an Ansible wrapper (in an unsupported contrib/ subfolder) that reads the *_HOSTS parameters for backward compatibility. - Another idea was to refactor the Packstack Python part around Ansible, summarized at https://etherpad.openstack.org/p/packstack-refactor-take2 . This proposal aims at keeping multi-node support, since Ansible makes it easy anyway. Any other ideas/concerns? Pros and cons of each? Thanks, Javier From ichavero at redhat.com Wed Jun 8 16:32:52 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Wed, 8 Jun 2016 12:32:52 -0400 (EDT) Subject: [rdo-list] Reinstallation RDO Openstack In-Reply-To: References: Message-ID: <1079074612.57957501.1465403572848.JavaMail.zimbra@redhat.com> Can you check the OpenStack Puppet Modules and Packstack versions? You should have the latest versions of both packages. Cheers, Ivan ----- Original Message ----- > From: "Andrey Shevel" > To: rdo-list at redhat.com > Sent: Wednesday, June 8, 2016 8:06:47 AM > Subject: [rdo-list] Reinstallation RDO Openstack > > Hello, > > after update OS (Scientific Linux 7.2) I decided to reinstall the Openstack.
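Ivan's version check can be paired with a quick look for the provider named in the error. A sketch only: the sed extraction works on the error line Andrey quoted, while the module path is just the usual RDO location, an assumption that may not match every install:

```shell
# Extract the failing provider path from the packstack/puppet autoload error,
# then check whether any installed puppet module actually ships that file.
err="Error: Could not autoload puppet/provider/nova_flavor/openstack: uninitialized constant Puppet::Provider::Openstack"
provider=$(echo "$err" | sed -n 's/.*autoload \(puppet\/provider\/[^:]*\):.*/\1/p')
echo "failed provider: $provider"

# Assumed RDO module location; adjust to your install.
for d in /usr/share/openstack-puppet/modules/*/lib; do
    if [ -e "$d/$provider.rb" ]; then
        echo "shipped by: $d"
    fi
done

# And Ivan's actual suggestion: confirm both package versions are current.
rpm -q openstack-packstack openstack-puppet-modules 2>/dev/null || true
```

If the provider file is missing or stale, updating openstack-puppet-modules (and packstack) is the usual fix for this class of autoload error.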
> > I did everything from the page openstack.redhat.com > > unfortunately I got message (I tried several times and gat exactly same > answer) > > ========================================= > Applying 212.193.96.154_keystone.pp > Applying 212.193.96.154_glance.pp > Applying 212.193.96.154_cinder.pp > 212.193.96.154_keystone.pp: [ DONE ] > 212.193.96.154_glance.pp: [ DONE ] > 212.193.96.154_cinder.pp: [ DONE ] > Applying 212.193.96.154_api_nova.pp > 212.193.96.154_api_nova.pp: [ ERROR ] > Applying Puppet manifests [ ERROR ] > > ERROR : Error appeared during Puppet run: 212.193.96.154_api_nova.pp > Error: Could not autoload puppet/provider/nova_flavor/openstack: > uninitialized constant Puppet::Provider::Openstack > You will find full trace in log > /var/tmp/packstack/20160608-154226-EtZiWG/manifests/212.193.96.154_api_nova.pp.log > Please check log file > /var/tmp/packstack/20160608-154226-EtZiWG/openstack-setup.log for more > information > Additional information: > * A new answerfile was created in: > /root/packstack-answers-20160608-154226.txt > * Time synchronization installation was skipped. Please note that > unsynchronized time on server instances might be problem for some > OpenStack components. > * File /root/keystonerc_admin has been created on OpenStack client > host 212.193.96.154. To use the command line tools you need to source > the file. > * To access the OpenStack Dashboard browse to > http://212.193.96.154/dashboard . > Please, find your login credentials stored in the keystonerc_admin in > your home directory. 
> * To use Nagios, browse to http://212.193.96.154/nagios username: > nagiosadmin, password: 89651f6d0bdd4176 > ++ echo > > +++ date > ++ echo 'Stop Date & Time = ' Wed Jun 8 15:51:09 MSK 2016 > Stop Date & Time = Wed Jun 8 15:51:09 MSK 2016 > [root at lmsys001 ~]# uname -a > Linux lmsys001.pnpi.spb.ru 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May > 12 04:13:05 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux > [root at lmsys001 ~]# cat /proc/version > Linux version 3.10.0-327.18.2.el7.x86_64 > (mockbuild at sl7-uefisign.fnal.gov) (gcc version 4.8.5 20150623 (Red Hat > 4.8.5-4) (GCC) ) #1 SMP Thu May 12 04:13:05 CDT 2016 > [root at lmsys001 ~]# puppet --version > 3.6.2 > =================================== > > > Any ideas would be helpful. > > Thanks in advance. > > > > -- > Andrey Y Shevel > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From hguemar at fedoraproject.org Wed Jun 8 17:24:36 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 8 Jun 2016 19:24:36 +0200 Subject: [rdo-list] [Meeting] RDO meeting (2016-06-08) Minutes Message-ID: ============================== #rdo: RDO meeting (2016-06-08) ============================== Meeting started by number80 at 15:00:36 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-06-08/rdo_meeting_(2016-06-08).2016-06-08-15.00.log.html . 
Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/RDO-Meeting (number80, 15:02:02) * DLRN instance migration to ci.centos infra (recurring) (number80, 15:04:44) * ACTION: dmsimard to symlink hashes on internal dlrn (current-passed-ci, current-tripleo) (dmsimard, 15:11:26) * ACTION: jpena to switch DNS for trunk-primary to the ci.centos.org instance on Jun 13 (jpena, 15:17:26) * Test day readiness (number80, 15:17:52) * LINK: https://www.rdoproject.org/testday/ (number80, 15:19:48) * ACTION: everyone help rbowen to update test scenarios (number80, 15:21:43) * Packstack refactor (number80, 15:23:19) * LINK: https://github.com/javierpena/packstack/commit/affad262614a375ed48eac5964dd477d392cf4ca crashed my browser (EmilienM, 15:25:36) * LINK: https://review.openstack.org/#/q/status:open+topic:tripleo-multinode (EmilienM, 15:28:59) * ACTION: jpena put packstack phase 2 discussion on the list (number80, 15:36:21) * Demos needed for RDO booth @ Red Hat Summit (number80, 15:38:08) * LINK: https://etherpad.openstack.org/p/rhsummit-rdo-booth (number80, 15:38:17) * LINK: https://etherpad.openstack.org/p/rhsummit-rdo-booth (rbowen, 15:38:25) * if you have a cool demo to show @ RH Summit ping rbowen (number80, 15:40:43) * open floor (number80, 15:41:14) * LINK: https://review.rdoproject.org/r/1100 adds a second plugin (jpena, 15:42:46) Meeting ended at 15:50:28 UTC. 
Action Items ------------ * dmsimard to symlink hashes on internal dlrn (current-passed-ci, current-tripleo) * jpena to switch DNS for trunk-primary to the ci.centos.org instance on Jun 13 * everyone help rbowen to update test scenarios * jpena put packstack phase 2 discussion on the list Action Items, by person ----------------------- * dmsimard * dmsimard to symlink hashes on internal dlrn (current-passed-ci, current-tripleo) * jpena * jpena to switch DNS for trunk-primary to the ci.centos.org instance on Jun 13 * jpena put packstack phase 2 discussion on the list * rbowen * everyone help rbowen to update test scenarios * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * number80 (51) * dmsimard (47) * jpena (39) * leifmadsen (15) * imcsk8 (14) * trown (13) * rbowen (11) * EmilienM (9) * zodbot (8) * openstack (4) * amoralej (4) * Duck (3) * ccamacho (2) * larsks (1) * champson (1) * eggmaster (1) * rdogerrit (1) * trwon (0) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From ichavero at redhat.com Wed Jun 8 19:27:52 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Wed, 8 Jun 2016 15:27:52 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> Message-ID: <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Javier Pena" > To: "rdo-list" > Sent: Wednesday, June 8, 2016 11:29:54 AM > Subject: [rdo-list] Packstack refactor and future ideas > > Hi RDO, > > After some discussions about the way Packstack was (mis)using Puppet, and how > to improve it, I've been working on a refactor. Its current state is > available at > https://github.com/javierpena/packstack/tree/feature/manifest_refactor, and > as soon as it is polished it will go to Gerrit. 
It basically tries to reduce > the number of Puppet executions to one per server role (controller, network > node, compute), instead of multiple individual runs. I think it can be reduced to a single manifest per node. Also, when the review is created it would be easier to check if you create one review each for the Python code, Puppet manifests, tests and release notes. > While talking about the refactor, a second discussion about a deeper change > was started. I'd like to summarize the current concerns and ideas in this > mail, so we can follow-up and make a decision: > > - Currently, the Packstack CI is only testing single-node installs. Testing > multi-node installs upstream has been challenging, and multi-node may go > beyond the PoC target of Packstack. So, one proposal is to keep all-in-one > single node only, add Ansible wrapper (in unsupported contrib/ subfolder) > reading *_HOSTS parameters for backward compat. > I would like Packstack to stay multi-node, since the requirements for TripleO are still too big for a PoC. > - Another idea was to refactor the Packstack Python part around Ansible, > summarized at https://etherpad.openstack.org/p/packstack-refactor-take2 . > This proposal aims at keeping multi-node support, since Ansible makes it > easy anyway. Does it make sense to convert packstack to an ansible module? > > Any other ideas/concerns? Pros and cons of each? I started a refactor [1] as part of a manifest cleanup and unification, and to start the refactor discussion; I'm happy that Javier took the puppet-openstack-integration road. Another idea around this refactor is to make packstack create manifests that can be used even without packstack runs, installing them in the proper puppet environment directories and setting the OPM path as part of this OpenStack environment, thus making packstack a puppet manifest generator... 
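[Editor's sketch] The "puppet manifest generator" idea above could be sketched roughly as follows. This is only an illustration, not Packstack's actual code: the manifest template, the role-to-class mapping and the output layout are all invented for the example.

```python
# Sketch of a "manifest generator" mode: instead of applying Puppet itself,
# the tool renders one manifest per role to a directory, so an existing
# Puppet environment can apply them later. All names here are invented.
import os

MANIFEST_TEMPLATE = """# Generated manifest for role '{role}'
{includes}
"""

# Hypothetical role-to-class mapping; a real tool would derive this from
# the answer file.
ROLE_CLASSES = {
    'controller': ['::keystone', '::glance::api', '::nova::api'],
    'network': ['::neutron::agents::l3', '::neutron::agents::dhcp'],
    'compute': ['::nova::compute'],
}

def generate_manifest(role, out_dir):
    """Render a per-role manifest to out_dir and return its path."""
    includes = '\n'.join('include %s' % cls for cls in ROLE_CLASSES[role])
    path = os.path.join(out_dir, '%s.pp' % role)
    with open(path, 'w') as f:
        f.write(MANIFEST_TEMPLATE.format(role=role, includes=includes))
    return path

if __name__ == '__main__':
    import tempfile
    out = tempfile.mkdtemp()
    for role in ROLE_CLASSES:
        print(generate_manifest(role, out))
```

The point is only the shape of the idea: render per-role manifests to disk so a Puppet environment can apply them with or without Packstack driving the run.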
Cheers, Ivan [1] https://review.openstack.org/#/c/307519/ From mohammed.arafa at gmail.com Wed Jun 8 19:51:11 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 8 Jun 2016 15:51:11 -0400 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: Is this the beginning of a merge between Packstack and TripleO quickstart? It doesn't make sense to have two projects do the same thing On Jun 8, 2016 3:30 PM, "Ivan Chavero" wrote: > > > ----- Original Message ----- > > From: "Javier Pena" > > To: "rdo-list" > > Sent: Wednesday, June 8, 2016 11:29:54 AM > > Subject: [rdo-list] Packstack refactor and future ideas > > > > Hi RDO, > > > > After some discussions about the way Packstack was (mis)using Puppet, > and how > > to improve it, I've been working on a refactor. Its current state is > > available at > > https://github.com/javierpena/packstack/tree/feature/manifest_refactor, > and > > as soon as it is polished it will go to Gerrit. It basically tries to > reduce > > the number of Puppet executions to one per server role (controller, > network > > node, compute), instead of multiple individual runs. > > I think it can be reduced to a single manifest per node. > Also, when a review is created it would be easier to check if you create > one > review for the python, puppet, tests and release notes. > > > While talking about the refactor, a second discussion about a deeper > change > > was started. I'd like to summarize the current concerns and ideas in this > > mail, so we can follow-up and make a decision: > > > > - Currently, the Packstack CI is only testing single-node installs. > Testing > > multi-node installs upstream has been challenging, and multi-node may go > > beyond the PoC target of Packstack. 
So, one proposal is to keep > all-in-one > > single node only, add Ansible wrapper (in unsupported contrib/ subfolder) > > reading *_HOSTS parameters for backward compat. > > > > I would like to have packstack to be multi node since the requirements for > TripleO are still to big for PoC. > > > - Another idea was to refactor the Packstack Python part around Ansible, > > summarized at https://etherpad.openstack.org/p/packstack-refactor-take2 > . > > This proposal aims at keeping multi-node support, since Ansible makes it > > easy anyway. > > Does it make sense to convert packstack to an ansible module? > > > > > > Any other ideas/concerns? Pros and cons of each? > > I started a refactor [1] as part of a manifest cleanup, unifcation and to > start de refactor discussion, i'm happy that Javier took the > puppet-openstack-integration > road. > Another idea around this refactor is to make packstack create manifests > that can be used even without packstack runs, installing them in the > proper puppet > environment directories and setting the OPM path as part of the this > OpenStack? environment, > thus making packstack a puppet manifest generator... > > Cheers, > Ivan > > > > [1] https://review.openstack.org/#/c/307519/ > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at redhat.com Wed Jun 8 20:04:10 2016 From: apevec at redhat.com (Alan Pevec) Date: Wed, 8 Jun 2016 22:04:10 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: On Wed, Jun 8, 2016 at 9:51 PM, Mohammed Arafa wrote: > Is this the beginning of a merge between packsack and triple quick start? 
It > doesn't make sense to have two projects do the same thing It is not, and it is not the same thing. Packstack really should focus on its original proof-of-concept use case and do one thing well, which is single-node installation. Multinode will be hard to get reliably working in CI and is best left to the tripleo project to handle; at most we can have an experimental wrapper based on Ansible, running single-node Packstack. But core Packstack should have simple deps and NOT depend on Ansible. Cheers, Alan From dms at redhat.com Wed Jun 8 20:37:08 2016 From: dms at redhat.com (David Moreau Simard) Date: Wed, 8 Jun 2016 16:37:08 -0400 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: On Wed, Jun 8, 2016 at 3:27 PM, Ivan Chavero wrote: > I think it can be reduced to a single manifest per node. > Also, when a review is created it would be easier to check if you create one > review for the python, puppet, tests and release notes. This would not pass CI and thus could not be merged. If there are separate commits, each must pass CI. Otherwise, my opinion is that Packstack should focus on being a lean, simple and efficient single-node installation tool that targets the same use case as DevStack, but for the RHEL-derivatives and RDO/OSP population. A tool that is lightweight, simple (to an extent), easy to extend and add new projects in, and focuses on developers and proofs of concept. I don't believe Packstack should be able to handle multi-node by itself. I don't think I am being pessimistic by saying there are too few resources contributing to Packstack to make multi-node a good story. We're not testing Packstack multi-node right now, and testing it properly is really hard; just ask the whole teams of people focused on just testing TripleO. 
If Packstack is really good at installing things on one node, an advanced/experienced user could have Packstack install components on different servers if that is what he is looking for. Pseudo-code: - Server 1: packstack --install-rabbitmq=y --install-mariadb=y - Server 2: packstack --install-keystone=y --rabbitmq-server=server1 --database-server=server1 - Server 3: packstack --install-glance=y --keystone-server=server2 --database-server=server1 --rabbitmq-server=server1 - Server 4: packstack --install-nova=y --keystone-server=server2 --database-server=server1 --rabbitmq-server=server1 (etc) So in my concept, Packstack is not able to do multi-node by itself but provides the necessary mechanisms to allow it to be installed across different nodes. If an orchestration or wrapper mechanism is required, Ansible is an obvious choice but not the only one. Using Ansible would, notably, make it easy to rip out all the python code that's around executing things on servers over SSH. David Moreau Simard Senior Software Engineer | Openstack RDO dmsimard = [irc, github, twitter] From bderzhavets at hotmail.com Wed Jun 8 21:45:54 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 8 Jun 2016 21:45:54 +0000 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com>, <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Ivan Chavero Sent: Wednesday, June 8, 2016 3:27 PM To: Javier Pena Cc: rdo-list Subject: Re: [rdo-list] Packstack refactor and future ideas ----- Original Message ----- > From: "Javier Pena" > To: "rdo-list" > Sent: Wednesday, June 8, 2016 11:29:54 AM > Subject: [rdo-list] Packstack refactor and future ideas > > Hi RDO, > > After some discussions about the way Packstack was (mis)using Puppet, and how > to
improve it, I've been working on a refactor. Its current state is > available at > https://github.com/javierpena/packstack/tree/feature/manifest_refactor, and > as soon as it is polished it will go to Gerrit. It basically tries to reduce > the number of Puppet executions to one per server role (controller, network > node, compute), instead of multiple individual runs. I think it can be reduced to a single manifest per node. Also, when a review is created it would be easier to check if you create one review for the python, puppet, tests and release notes. > While talking about the refactor, a second discussion about a deeper change > was started. I'd like to summarize the current concerns and ideas in this > mail, so we can follow-up and make a decision: > > - Currently, the Packstack CI is only testing single-node installs. Testing > multi-node installs upstream has been challenging, and multi-node may go > beyond the PoC target of Packstack. So, one proposal is to keep all-in-one > single node only, add Ansible wrapper (in unsupported contrib/ subfolder) > reading *_HOSTS parameters for backward compat. > I would like to have packstack to be multi node since the requirements for TripleO are still to big for PoC. But it already is: RDO Mitaka ML2&OVS&VXLAN (VLAN) Controller+Network+N*Compute+Storage (CONFIG_UNSUPPORTED=y) Controller/Network+N*Compute+Storage (CONFIG_UNSUPPORTED=y) Only the external OVS bridge on the Controller/Network (or Network) node has to be configured manually. It works fine unless HA controllers are a must. Some posts in this thread sound like an intent to drop features that have been present and working all along in Packstack. I hardly understand why working features should be dropped due to not passing through CI. 
Yes, converting the system to DVR requires several updates after the Packstack pre-deployment, but I would never compare Packstack with Devstack, because: 1. Packstack installs different services on different nodes exactly as the answer file instructs it to. 2. Devstack starts a bunch of non-restartable (for the time being) daemons on one or several nodes. Thanks. Boris. > - Another idea was to refactor the Packstack Python part around Ansible, > summarized at https://etherpad.openstack.org/p/packstack-refactor-take2 . > This proposal aims at keeping multi-node support, since Ansible makes it > easy anyway. Does it make sense to convert packstack to an ansible module? > > Any other ideas/concerns? Pros and cons of each? I started a refactor [1] as part of a manifest cleanup and unification, and to start the refactor discussion; I'm happy that Javier took the puppet-openstack-integration road. Another idea around this refactor is to make packstack create manifests that can be used even without packstack runs, installing them in the proper puppet environment directories and setting the OPM path as part of this OpenStack environment, thus making packstack a puppet manifest generator... Cheers, Ivan [1] https://review.openstack.org/#/c/307519/ _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at redhat.com Wed Jun 8 22:13:51 2016 From: apevec at redhat.com (Alan Pevec) Date: Thu, 9 Jun 2016 00:13:51 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: > I hardly understand why working features should be dropped due to not passing through CI. 
If it is not tested, it doesn't work, by definition. We want to ensure, with resources available, that Packstack works well in the future for the use-case it was designed for, proof-of-concept installations. For production installations TripleO is recommended. Cheers, Alan From ichavero at redhat.com Wed Jun 8 22:33:25 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Wed, 8 Jun 2016 18:33:25 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "David Moreau Simard" > To: "Ivan Chavero" > Cc: "Javier Pena" , "rdo-list" > Sent: Wednesday, June 8, 2016 3:37:08 PM > Subject: Re: [rdo-list] Packstack refactor and future ideas > > On Wed, Jun 8, 2016 at 3:27 PM, Ivan Chavero wrote: > > I think it can be reduced to a single manifest per node. > > Also, when a review is created it would be easier to check if you create > > one > > review for the python, puppet, tests and release notes. > > This would not pass CI and thus could not be merged. > If there are separate commits, each must pass CI. well, make it just one big commit if there's no way around this > Otherwise, my opinion is that Packstack should focus on being a lean, > simple and efficient single node installation tool that targets the > same use case as DevStack but for the RHEL-derivatives and RDO/OSP > population. > A tool that is lightweight, simple (to an extent), easy to extend and > add new projects in and focuses on developers and proof of concepts. > I don't believe Packstack should be able to handle multi-node by itself. > I don't think I am being pessimistic by saying there is too few > resources contributing to Packstack to make multi-node a good story. 
> We're not testing Packstack multi-node right now and testing it > properly is really hard, just ask the whole teams of people focused on > just testing TripleO. So in your opinion we should drop features that Packstack already has because they are difficult to test. I don't agree with this; we can mark the untested features as "experimental" or "unsupported". > If Packstack is really good at installing things on one node, an > advanced/experienced user could have Packstack install components on > different servers if that is what he is looking for. > > Pseudo-code: > - Server 1: packstack --install-rabbitmq=y --install-mariadb=y > - Server 2: packstack --install-keystone=y --rabbitmq-server=server1 > --database-server=server1 > - Server 3: packstack --install-glance=y --keystone-server=server2 > --database-server=server1 --rabbitmq-server=server1 > - Server 4: packstack --install-nova=y --keystone-server=server2 > --database-server=server1 --rabbitmq-server=server1 > (etc) I may be wrong, but right now Packstack can already do this; more command-line options are needed, or it might need little tweaks to the code, but this is not far from the current Packstack options. > So in my concept, Packstack is not able to do multi node by itself but > provides the necessary mechanisms to allow to be installed across > different nodes. > If an orchestration or wrapper mechanism is required, Ansible is a > obvious choice but not the only one. > Using Ansible would, notably, make it easy to rip out all the python > code that's around executing things on servers over SSH. > I think this refactor discussion should focus on proper Puppet usage and optimizations instead of retiring stuff that already works. Actually, Packstack used to be able to install all the components on different nodes, and that capability was later reduced to the current limited multinode features. 
We need a tool like Packstack so users can try RDO without the complexity of TripleO. Imagine you're new to OpenStack and you want to test it in different scenarios: not everybody has a spare machine with 16 GB of RAM just for testing, not to mention having to understand the concept of an undercloud before understanding the key concepts of OpenStack. Cheers, Ivan From hbrock at redhat.com Wed Jun 8 22:40:39 2016 From: hbrock at redhat.com (Hugh Brock) Date: Thu, 9 Jun 2016 00:40:39 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> Message-ID: On Jun 8, 2016 11:33 PM, "Ivan Chavero" wrote: > > > > ----- Original Message ----- > > From: "David Moreau Simard" > > To: "Ivan Chavero" > > Cc: "Javier Pena" , "rdo-list" < rdo-list at redhat.com> > > Sent: Wednesday, June 8, 2016 3:37:08 PM > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > > On Wed, Jun 8, 2016 at 3:27 PM, Ivan Chavero wrote: > > > I think it can be reduced to a single manifest per node. > > > Also, when a review is created it would be easier to check if you create > > > one > > > review for the python, puppet, tests and release notes. > > > > This would not pass CI and thus could not be merged. > > If there are separate commits, each must pass CI. > > well, make it just one big commit if there's no way around this > > > Otherwise, my opinion is that Packstack should focus on being a lean, > > simple and efficient single node installation tool that targets the > > same use case as DevStack but for the RHEL-derivatives and RDO/OSP > > population. > > A tool that is lightweight, simple (to an extent), easy to extend and > > add new projects in and focuses on developers and proof of concepts. 
> > > I don't believe Packstack should be able to handle multi-node by itself. > > I don't think I am being pessimistic by saying there is too few > > resources contributing to Packstack to make multi-node a good story. > > We're not testing Packstack multi-node right now and testing it > > properly is really hard, just ask the whole teams of people focused on > > just testing TripleO. > > So in your opinion we should drop features that packstack already has > because this are difficult to test. > I don't agree with this, we can set the untested features as "experimental" > or "unsupported" > > > > If Packstack is really good at installing things on one node, an > > advanced/experienced user could have Packstack install components on > > different servers if that is what he is looking for. > > > > Pseudo-code: > > - Server 1: packstack --install-rabbitmq=y --install-mariadb=y > > - Server 2: packstack --install-keystone=y --rabbitmq-server=server1 > > --database-server=server1 > > - Server 3: packstack --install-glance=y --keystone-server=server2 > > --database-server=server1 --rabbitmq-server=server1 > > - Server 4: packstack --install-nova=y --keystone-server=server2 > > --database-server=server1 --rabbitmq-server=server1 > > (etc) > > I can be wrong but right now Packstack can already do this stuff, > more command line options are needed or it might need little tweaks to the > code but this is not far from current Packstack options. > > > So in my concept, Packstack is not able to do multi node by itself but > > provides the necessary mechanisms to allow to be installed across > > different nodes. > > If an orchestration or wrapper mechanism is required, Ansible is a > > obvious choice but not the only one. > > Using Ansible would, notably, make it easy to rip out all the python > > code that's around executing things on servers over SSH. 
> > > > > I think this refactor discussion should focus on a proper puppet usage and > optimizations instead of retiring stuff that already works. > Actually Packstack used to be able to install all the components in different > nodes and this feature was modified to the current limited multinode features. > > We need a tool like Packstack so users can try RDO without the complexity of > TripleO, imagine you're new to OpenStack and you want to test it in different > scenarios, not everybody has a spare machine with 16GB of ram just to test, not to > mention the fact of understanding the concept of undercloud before understanding > the key concepts of OpenStack. > > Cheers, > Ivan Here's a possibly stupid question, indulge me.... Seems like we're always going to need a simple (ish) tool that just installs the openstack services on a single machine, without any need for VMs. In fact, the tripleo installer - instack - is one such tool. Packstack is another, more flexible such tool. Should we consider merging or adapting them to be the same tool? -Hugh > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ichavero at redhat.com Wed Jun 8 22:44:23 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Wed, 8 Jun 2016 18:44:23 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> Message-ID: <492812519.58008630.1465425863142.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alan Pevec" > To: "Boris Derzhavets" > Cc: "rdo-list" > Sent: Wednesday, June 8, 2016 5:13:51 PM > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > I hardly understand why working features should be dropped due to not > > passing through CI. > > If it is not tested, it doesn't work, by definition. Well, it has been constantly manually tested and most use cases work. If a feature of a piece of software is difficult to test automatically, should it be dropped? I think this is a bit extreme; the features that can't or won't be tested can be flagged as "unstable" or "experimental" but not just dropped. > We want to ensure, with resources available, that Packstack works well > in the future for the use-case it was designed for, proof-of-concept > installations. 
We all want that, and I've noticed that people do multinode PoC installations. Cheers, Ivan From ichavero at redhat.com Wed Jun 8 23:05:13 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Wed, 8 Jun 2016 19:05:13 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> Message-ID: <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Alan Pevec" > To: "Ivan Chavero" > Cc: "rdo-list" > Sent: Wednesday, June 8, 2016 5:48:07 PM > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > I don't agree with this, we can set the untested features as "experimental" > or "unsupported" > > Best is to remove them to make that clear, because no matter what > deprecation warnings you put[*] people will keep using features until > they're gone, giving us bug reports to take care of and without CI to > verify fixes. > It's long-term maintenance burden I am concerned about, for features > which are clearly out of scope. I understand this; my concern is that if we remove this feature we will leave users with no tool for doing multinode in a lightweight way, and this might drive off users from testing/adopting RDO. I'm willing to create CI tests for multinode Packstack in order to maintain these features. And believe me, it would be easier for me just to drop these features, but I really think users are benefiting from them. 
Cheers, Ivan From apevec at redhat.com Wed Jun 8 23:21:55 2016 From: apevec at redhat.com (Alan Pevec) Date: Thu, 9 Jun 2016 01:21:55 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com> Message-ID: > I understand this, my concern is that if we remove this feature we will leave > users with no tool for doing multinode in a lightweight way and this might > drive off users from testing/adopting RDO. We would provide equivalent tool based on singlenode Packstack, with a clear unsupported message. That's really the only honest and responsible thing we can do, given resources available. > I'm willing to create CI tests for multinode Packstack in order to maintain > this features. It's not just tests, it's multinode CI infra that is hard. Cheers, Alan From javier.pena at redhat.com Thu Jun 9 08:23:33 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 9 Jun 2016 04:23:33 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> Message-ID: <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Wed, Jun 8, 2016 at 6:33 PM, Ivan Chavero wrote: > > I can be wrong but right now Packstack can already do this stuff, > > more command line options are needed or it might need little tweaks to the > > code but this is not far from current Packstack options. > > Right now Packstack has a lot of code and logic to connect to > additional nodes and do things. 
To be honest, the amount of code is not that big (at least to me). On a quick check over the refactored version, I see https://github.com/javierpena/packstack/blob/feature/manifest_refactor/packstack/plugins/prescript_000.py#L1277-L1363 could be simplified (maybe removed), then https://github.com/javierpena/packstack/blob/feature/manifest_refactor/packstack/plugins/puppet_950.py#L147-L246 would need to be rewritten, to support a single node. Everything else is small simplifications on the plugins to assume all hosts are the same. > Packstack, itself, connects to compute hosts to install nova, same > with the other kind of hosts. > > What I am saying is that Packstack should only ever be able to install > (efficiently) services on "localhost". > > Hence, me, as a user (with Ansible or manually), could do something > like I mentioned before: > - Login to Server 1 and run "packstack --install-rabbitmq=y > --install-mariadb=y" > - Login to Server 2 and run "packstack --install-keystone=y > --rabbitmq-server=server1 --database-server=server1" > - Login to Server 3 and run "packstack --install-glance=y > --keystone-server=server2 --database-server=server1 > --rabbitmq-server=server1" > - Login to Server 4 and run "packstack --install-nova=y > --keystone-server=server2 --database-server=server1 > --rabbitmq-server=server1" > (etc) > > This would work, allow multi node without having all the multi node > logic embedded and handled by Packstack itself. Doing this would require adding a similar layer of complexity, but in the puppet code instead of python. Right now, we assume that every API service is running on config['CONTROLLER_HOST'], with your proposal we should have the current host, and separate variables (and Hiera processing in python) to have a single variable per service. I think it's a good idea anyway, but I think it wouldn't reduce complexity or any associated CI coverage concerns. 
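[Editor's sketch] The "single variable per service" idea above could look roughly like this. It is a hypothetical illustration, not Packstack's actual code: the option names, the service list and the defaulting logic are invented for the example.

```python
# Hypothetical sketch of per-service host resolution: each service's host
# defaults to the controller unless overridden in the answer-file config,
# which is what running Packstack once per node would rely on.
SERVICES = ['keystone', 'glance', 'nova', 'rabbitmq', 'mariadb']

def resolve_service_hosts(config):
    """Return a {service: host} map, falling back to the controller host."""
    controller = config['CONFIG_CONTROLLER_HOST']
    return {
        svc: config.get('CONFIG_%s_HOST' % svc.upper(), controller)
        for svc in SERVICES
    }

if __name__ == '__main__':
    # Mirrors David's pseudo-code: rabbitmq and mariadb on server1,
    # everything else defaulting to the controller on server2.
    conf = {'CONFIG_CONTROLLER_HOST': 'server2',
            'CONFIG_RABBITMQ_HOST': 'server1',
            'CONFIG_MARIADB_HOST': 'server1'}
    print(resolve_service_hosts(conf))
```

The sketch shows why the complexity does not disappear, it just moves: every service lookup now needs this resolution step (and the matching Hiera data) instead of a single CONTROLLER_HOST reference.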
We could take an easier way and assume we only have 3 roles, as in the current refactored code: controller, network, compute. The logic would then be: - By default we install everything, so all in one - If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_NETWORK_HOSTS, we apply the network manifest - Same as above if our host is part of CONFIG_COMPUTE_HOSTS Of course, the last two options would assume a first server is installed as controller. This would allow us to reuse the same answer file on all runs (one per host as you proposed), eliminate the ssh code as we are always running locally, and make some assumptions in the python code, like expecting OPM to be deployed and such. A contributed ansible wrapper to automate the runs would be straightforward to create. What do you think? Would it be worth the effort? Regards, Javier > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > From javier.pena at redhat.com Thu Jun 9 08:44:13 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 9 Jun 2016 04:44:13 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <261446398.58009005.1465426441069.JavaMail.zimbra@redhat.com> Message-ID: <1179768269.13812648.1465461853624.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Jun 8, 2016 11:54 PM, "Ivan Chavero" < ichavero at redhat.com > wrote: > > > > > > > > ----- Original Message ----- > > > From: "Hugh Brock" < hbrock at redhat.com > > > > To: "Ivan Chavero" < ichavero at redhat.com > > > > Cc: "Javier Pena" < javier.pena at redhat.com >, "David Moreau Simard" < > > > dms at redhat.com >, "rdo-list" < rdo-list at redhat.com > > > > Sent: Wednesday, June 8, 2016 5:40:39 PM > > > Subject: Re: [rdo-list] Packstack 
refactor and future ideas > > > > > > On Jun 8, 2016 11:33 PM, "Ivan Chavero" < ichavero at redhat.com > wrote: > > > > > > > > > > > > > > > > ----- Original Message ----- > > > > > From: "David Moreau Simard" < dms at redhat.com > > > > > > To: "Ivan Chavero" < ichavero at redhat.com > > > > > > Cc: "Javier Pena" < javier.pena at redhat.com >, "rdo-list" < > > > rdo-list at redhat.com > > > > > > Sent: Wednesday, June 8, 2016 3:37:08 PM > > > > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > > > > > > > > On Wed, Jun 8, 2016 at 3:27 PM, Ivan Chavero < ichavero at redhat.com > > > > wrote: > > > > > > I think it can be reduced to a single manifest per node. > > > > > > Also, when a review is created it would be easier to check if you > > > create > > > > > > one > > > > > > review for the python, puppet, tests and release notes. > > > > > > > > > > This would not pass CI and thus could not be merged. > > > > > If there are separate commits, each must pass CI. > > > > > > > > well, make it just one big commit if there's no way around this > > > > > > > > > Otherwise, my opinion is that Packstack should focus on being a lean, > > > > > simple and efficient single node installation tool that targets the > > > > > same use case as DevStack but for the RHEL-derivatives and RDO/OSP > > > > > population. > > > > > A tool that is lightweight, simple (to an extent), easy to extend and > > > > > add new projects in and focuses on developers and proof of concepts. > > > > > > > > > I don't believe Packstack should be able to handle multi-node by > > > > > itself. > > > > > I don't think I am being pessimistic by saying there is too few > > > > > resources contributing to Packstack to make multi-node a good story. > > > > > We're not testing Packstack multi-node right now and testing it > > > > > properly is really hard, just ask the whole teams of people focused > > > > > on > > > > > just testing TripleO. 
> > > > > > > > So in your opinion we should drop features that packstack already has > > > > because this are difficult to test. > > > > I don't agree with this, we can set the untested features as > > > "experimental" > > > > or "unsupported" > > > > > > > > > > > > > If Packstack is really good at installing things on one node, an > > > > > advanced/experienced user could have Packstack install components on > > > > > different servers if that is what he is looking for. > > > > > > > > > > Pseudo-code: > > > > > - Server 1: packstack --install-rabbitmq=y --install-mariadb=y > > > > > - Server 2: packstack --install-keystone=y --rabbitmq-server=server1 > > > > > --database-server=server1 > > > > > - Server 3: packstack --install-glance=y --keystone-server=server2 > > > > > --database-server=server1 --rabbitmq-server=server1 > > > > > - Server 4: packstack --install-nova=y --keystone-server=server2 > > > > > --database-server=server1 --rabbitmq-server=server1 > > > > > (etc) > > > > > > > > I can be wrong but right now Packstack can already do this stuff, > > > > more command line options are needed or it might need little tweaks to > > > > the > > > > code but this is not far from current Packstack options. > > > > > > > > > So in my concept, Packstack is not able to do multi node by itself > > > > > but > > > > > provides the necessary mechanisms to allow to be installed across > > > > > different nodes. > > > > > If an orchestration or wrapper mechanism is required, Ansible is a > > > > > obvious choice but not the only one. > > > > > Using Ansible would, notably, make it easy to rip out all the python > > > > > code that's around executing things on servers over SSH. > > > > > > > > > > > > > > > > > I think this refactor discussion should focus on a proper puppet usage > > > > and > > > > optimizations instead of retiring stuff that already works. 
> > > > Actually Packstack used to be able to install all the components in > > > different > > > > nodes and this feature was modified to the current limited multinode > > > features. > > > > > > > > We need a tool like Packstack so users can try RDO without the > > > > complexity > > > of > > > > TripleO, imagine you're new to OpenStack and you want to test it in > > > different > > > > scenarios, not everybody has a spare machine with 16GB of ram just to > > > test, not to > > > > mention the fact of understanding the concept of undercloud before > > > understanding > > > > the key concepts of OpenStack. > > > > > > > > Cheers, > > > > Ivan > > > > > > Here's a possibly stupid question, indulge me.... > > > > > > Seems like we're always going to need a simple (ish) tool that just > > > installs the openstack services on a single machine, without any need for > > > VMs. > > > > > > In fact, the tripleo installer - instack - is one such tool. Packstack is > > > another, more flexible such tool. Should we consider merging or adapting > > > them to be the same tool? > > > > I don't think this is stupid at all, actually TripleO and Packstack are both > > based on OpenStack Puppet Modules but I don't think you can merge them > > since the behaviour of both tools is very different, Packstack is not > > focused on managing the hardware, it's just focused on installing OpenStack > > and I'm not very familiar with TripleO inner workings but I think it would > > be > > very difficult to make it more like Packstack. > > > > Cheers, > > Ivan > No, sorry, I didn't mean merge packstack and tripleo, they are very different > beasts. I meant merge the tripleo installer -- which is called "instack", > and whose job it is to install an openstack undercloud on a single machine > so that it can then install openstack -- with packstack, whose job it is to > install openstack on a single machine, for whatever reason.
Deployers who > want a production install could then go on to deploy a full overcloud. This could make a lot of sense. I'm not very aware of the instack internals, and I remember it had some code to create VMs for the overcloud on test environments, but it could make sense to use a specific Packstack profile for that. Javier > -Hugh From amoralej at redhat.com Thu Jun 9 09:07:38 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 9 Jun 2016 11:07:38 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Jun 9, 2016 at 1:21 AM, Alan Pevec wrote: >> I understand this, my concern is that if we remove this feature we will leave >> users with no tool for doing multinode in a lightweight way and this might >> drive off users from testing/adopting RDO. > > We would provide an equivalent tool based on single-node Packstack, with a > clear unsupported message. > That's really the only honest and responsible thing we can do, given > resources available. > Given that: - Users require multihost for PoCs - We can't test multihost in CI What is the difference between creating a new "unsupported" additional tool and marking multihost capabilities in Packstack as "unsupported/untested"? The current code enabling multihost in Packstack is working, and I have doubts about the expected savings from supporting multihost in Packstack versus creating new Ansible tooling and modifying Packstack to work well in this new "externally orchestrated mode". >> I'm willing to create CI tests for multinode Packstack in order to maintain >> these features. > > It's not just tests, it's multinode CI infra that is hard.
> > > Cheers, > Alan > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From amoralej at redhat.com Thu Jun 9 09:17:59 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 9 Jun 2016 11:17:59 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Jun 9, 2016 at 10:23 AM, Javier Pena wrote: > > > ----- Original Message ----- >> On Wed, Jun 8, 2016 at 6:33 PM, Ivan Chavero wrote: >> > I can be wrong but right now Packstack can already do this stuff, >> > more command line options are needed or it might need little tweaks to the >> > code but this is not far from current Packstack options. >> >> Right now Packstack has a lot of code and logic to connect to >> additional nodes and do things. > > To be honest, the amount of code is not that big (at least to me). > > On a quick check over the refactored version, I see https://github.com/javierpena/packstack/blob/feature/manifest_refactor/packstack/plugins/prescript_000.py#L1277-L1363 could be simplified (maybe removed), then https://github.com/javierpena/packstack/blob/feature/manifest_refactor/packstack/plugins/puppet_950.py#L147-L246 would need to be rewritten, to support a single node. Everything else is small simplifications on the plugins to assume all hosts are the same. > >> Packstack, itself, connects to compute hosts to install nova, same >> with the other kind of hosts. 
>> >> What I am saying is that Packstack should only ever be able to install >> (efficiently) services on "localhost". >> >> Hence, me, as a user (with Ansible or manually), could do something >> like I mentioned before: >> - Login to Server 1 and run "packstack --install-rabbitmq=y >> --install-mariadb=y" >> - Login to Server 2 and run "packstack --install-keystone=y >> --rabbitmq-server=server1 --database-server=server1" >> - Login to Server 3 and run "packstack --install-glance=y >> --keystone-server=server2 --database-server=server1 >> --rabbitmq-server=server1" >> - Login to Server 4 and run "packstack --install-nova=y >> --keystone-server=server2 --database-server=server1 >> --rabbitmq-server=server1" >> (etc) >> >> This would work, allow multi node without having all the multi node >> logic embedded and handled by Packstack itself. > > Doing this would require adding a similar layer of complexity, but in the puppet code instead of python. Right now, we assume that every API service is running on config['CONTROLLER_HOST'], with your proposal we should have the current host, and separate variables (and Hiera processing in python) to have a single variable per service. I think it's a good idea anyway, but I think it wouldn't reduce complexity or any associated CI coverage concerns. > > We could take an easier way and assume we only have 3 roles, as in the current refactored code: controller, network, compute. The logic would then be: > - By default we install everything, so all in one > - If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_NETWORK_HOSTS, we apply the network manifest > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > Of course, the last two options would assume a first server is installed as controller. 
> > This would allow us to reuse the same answer file on all runs (one per host as you proposed), eliminate the ssh code as we are always running locally, and make some assumptions in the python code, like expecting OPM to be deployed and such. A contributed ansible wrapper to automate the runs would be straightforward to create. > > What do you think? Would it be worth the effort? > IMO, the modular deployment model proposed by David is the best approach for OpenStack installers, one I always dreamt about when deploying OpenStack in real production environments. However, I think moving to that will be hard and would exceed the PoC-oriented nature of Packstack, where a controller + compute + network should suffice. In fact, I'd say that the best way to go for this modular approach is with containers and an orchestration tool designed for them (Kubernetes, right?) but this is a different story, :) > Regards, > Javier > >> >> David Moreau Simard >> Senior Software Engineer | Openstack RDO >> >> dmsimard = [irc, github, twitter] >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From amoralej at redhat.com Thu Jun 9 09:54:38 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 9 Jun 2016 11:54:38 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1179768269.13812648.1465461853624.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <261446398.58009005.1465426441069.JavaMail.zimbra@redhat.com> <1179768269.13812648.1465461853624.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Jun 9, 2016 at 10:44 AM, Javier Pena wrote: > ----- Original Message ----- >> On Jun 8, 2016 11:54 PM, "Ivan Chavero" <
ichavero at redhat.com > wrote: >> > >> > >> > >> > ----- Original Message ----- >> > > From: "Hugh Brock" < hbrock at redhat.com > >> > > To: "Ivan Chavero" < ichavero at redhat.com > >> > > Cc: "Javier Pena" < javier.pena at redhat.com >, "David Moreau Simard" < >> > > dms at redhat.com >, "rdo-list" < rdo-list at redhat.com > >> > > Sent: Wednesday, June 8, 2016 5:40:39 PM >> > > Subject: Re: [rdo-list] Packstack refactor and future ideas >> > > >> > > On Jun 8, 2016 11:33 PM, "Ivan Chavero" < ichavero at redhat.com > wrote: >> > > > >> > > > >> > > > >> > > > ----- Original Message ----- >> > > > > From: "David Moreau Simard" < dms at redhat.com > >> > > > > To: "Ivan Chavero" < ichavero at redhat.com > >> > > > > Cc: "Javier Pena" < javier.pena at redhat.com >, "rdo-list" < >> > > rdo-list at redhat.com > >> > > > > Sent: Wednesday, June 8, 2016 3:37:08 PM >> > > > > Subject: Re: [rdo-list] Packstack refactor and future ideas >> > > > > >> > > > > On Wed, Jun 8, 2016 at 3:27 PM, Ivan Chavero < ichavero at redhat.com > >> > > wrote: >> > > > > > I think it can be reduced to a single manifest per node. >> > > > > > Also, when a review is created it would be easier to check if you >> > > create >> > > > > > one >> > > > > > review for the python, puppet, tests and release notes. >> > > > > >> > > > > This would not pass CI and thus could not be merged. >> > > > > If there are separate commits, each must pass CI. >> > > > >> > > > well, make it just one big commit if there's no way around this >> > > > >> > > > > Otherwise, my opinion is that Packstack should focus on being a lean, >> > > > > simple and efficient single node installation tool that targets the >> > > > > same use case as DevStack but for the RHEL-derivatives and RDO/OSP >> > > > > population. >> > > > > A tool that is lightweight, simple (to an extent), easy to extend and >> > > > > add new projects in and focuses on developers and proof of concepts. 
>> > > > >> > > > > I don't believe Packstack should be able to handle multi-node by >> > > > > itself. >> > > > > I don't think I am being pessimistic by saying there is too few >> > > > > resources contributing to Packstack to make multi-node a good story. >> > > > > We're not testing Packstack multi-node right now and testing it >> > > > > properly is really hard, just ask the whole teams of people focused >> > > > > on >> > > > > just testing TripleO. >> > > > >> > > > So in your opinion we should drop features that packstack already has >> > > > because this are difficult to test. >> > > > I don't agree with this, we can set the untested features as >> > > "experimental" >> > > > or "unsupported" >> > > > >> > > > >> > > > > If Packstack is really good at installing things on one node, an >> > > > > advanced/experienced user could have Packstack install components on >> > > > > different servers if that is what he is looking for. >> > > > > >> > > > > Pseudo-code: >> > > > > - Server 1: packstack --install-rabbitmq=y --install-mariadb=y >> > > > > - Server 2: packstack --install-keystone=y --rabbitmq-server=server1 >> > > > > --database-server=server1 >> > > > > - Server 3: packstack --install-glance=y --keystone-server=server2 >> > > > > --database-server=server1 --rabbitmq-server=server1 >> > > > > - Server 4: packstack --install-nova=y --keystone-server=server2 >> > > > > --database-server=server1 --rabbitmq-server=server1 >> > > > > (etc) >> > > > >> > > > I can be wrong but right now Packstack can already do this stuff, >> > > > more command line options are needed or it might need little tweaks to >> > > > the >> > > > code but this is not far from current Packstack options. >> > > > >> > > > > So in my concept, Packstack is not able to do multi node by itself >> > > > > but >> > > > > provides the necessary mechanisms to allow to be installed across >> > > > > different nodes. 
>> > > > > If an orchestration or wrapper mechanism is required, Ansible is a >> > > > > obvious choice but not the only one. >> > > > > Using Ansible would, notably, make it easy to rip out all the python >> > > > > code that's around executing things on servers over SSH. >> > > > > >> > > > >> > > > >> > > > I think this refactor discussion should focus on a proper puppet usage >> > > > and >> > > > optimizations instead of retiring stuff that already works. >> > > > Actually Packstack used to be able to install all the components in >> > > different >> > > > nodes and this feature was modified to the current limited multinode >> > > features. >> > > > >> > > > We need a tool like Packstack so users can try RDO without the >> > > > complexity >> > > of >> > > > TripleO, imagine you're new to OpenStack and you want to test it in >> > > different >> > > > scenarios, not everybody has a spare machine with 16GB of ram just to >> > > test, not to >> > > > mention the fact of understanding the concept of undercloud before >> > > understanding >> > > > the key concepts of OpenStack. >> > > > >> > > > Cheers, >> > > > Ivan >> > > >> > > Here's a possibly stupid question, indulge me.... >> > > >> > > Seems like we're always going to need a simple (ish) tool that just >> > > installs the openstack services on a single machine, without any need for >> > > VMs. >> > > >> > > In fact, the tripleo installer - instack - is one such tool. Packstack is >> > > another, more flexible such tool. Should we consider merging or adapting >> > > them to be the same tool? 
>> > >> > I don't think this is stupid at all, actually TripleO and Packstack are both >> > based on OpenStack Puppet Modules but I don't think you can merge them >> > since the behaviour of both tools is very different, Packstack is not >> > focused on managing the hardware, it's just focused on installing OpenStack >> > and I'm not very familiar with TripleO inner workings but I think it would >> > be >> > very difficult to make it more like Packstack. >> > >> > Cheers, >> > Ivan >> No, sorry, I didn't mean merge packstack and tripleo, they are very different >> beasts. I meant merge the tripleo installer -- which is called "instack", >> and whose job it is to install an openstack undercloud on a single machine >> so that it can then install openstack -- with packstack, whose job it is to >> install openstack on a single machine, for whatever reason. Deployers who >> want a production install could then go on to deploy a full overcloud. > > This could make a lot of sense. I'm not very aware of the instack internals, and I remember it had some code to create VMs for the overcloud on test environments, but it could make sense to use a specific Packstack profile for that. Sounds good, maybe we could work with someone who knows how instack works to prototype how this could be done. > > Javier > >> -Hugh > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ckdwibedy at gmail.com Thu Jun 9 10:12:06 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Thu, 9 Jun 2016 15:42:06 +0530 Subject: [rdo-list] Fwd: ConnectFailure error upon triggering "nova image-list" command using openstack-mitaka release In-Reply-To: References: Message-ID: Hi Boris, Thank you for your suggestion. I found the permission of nova.conf was causing this issue.
Changed the permission via #chmod 666 /etc/nova/nova.conf and did not see the ConnectFailure issue again. Regards, Chinmaya On Wed, Jun 8, 2016 at 6:28 PM, Boris Derzhavets wrote: > > > > ------------------------------ > *From:* Chinmaya Dwibedy > *Sent:* Wednesday, June 8, 2016 8:39 AM > *To:* Boris Derzhavets > *Cc:* rdo-list at redhat.com > *Subject:* Re: [rdo-list] Fwd: ConnectFailure error upon triggering "nova > image-list" command using openstack-mitaka release > > > Hi Boris, > > It appears that the nova-api process is not running. > > Check /var/log/nova/api.log (or nova-api.log, I just forgot the exact name > of the log file) > > > [root at localhost ~(keystone_admin)]# netstat -antp | grep 8774 > > [root at localhost ~(keystone_admin)]# iptables-save | grep 8774 > > -A INPUT -p tcp -m multiport --dports 8773,8774,8775 -m comment --comment > "001 nova api incoming nova_api" -j ACCEPT > > [root at localhost ~(keystone_admin)]# > > > > Regards, > > Chinmaya > > > > > On Wed, Jun 8, 2016 at 5:49 PM, Boris Derzhavets > wrote: > >> >> >> >> ------------------------------ >> *From:* rdo-list-bounces at redhat.com on >> behalf of Chinmaya Dwibedy >> *Sent:* Wednesday, June 8, 2016 8:08 AM >> *To:* rdo-list at redhat.com >> *Subject:* [rdo-list] Fwd: ConnectFailure error upon triggering "nova >> image-list" command using openstack-mitaka release >> >> >> Hi, >> >> >> I am getting the ConnectFailure error message upon triggering "nova >> image-list" command. The nova-api process should be listening on 8774. It >> doesn't look like it is running. Also, I do not find any error logs in >> nova-api.log >> >> >> [BD] >> >> >> # netstat -antp | grep 8774 >> >> # iptables-save | grep 8774 >> >> If the first command returns xxxxxx/python >> >> then `ps -ef | grep xxxxxx` >> >> >> nova-compute.log and nova-conductor.log. I am using the openstack-mitaka >> release on the host (CentOS 7.2). How can I debug and know what prevents >> it from running? Please suggest.
>> >> >> Note: This was working while back and got this issue all of a sudden. >> >> >> Here are some logs. >> >> >> [root at localhost ~(keystone_admin)]# nova image-list >> >> ERROR (ConnectFailure): Unable to establish connection to >> http://172.18.121.48:8774/v2/4bc608763cee41d9a8df26d3ef919825 >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> [root at localhost ~(keystone_admin)]# service openstack-nova-api restart >> >> Redirecting to /bin/systemctl restart openstack-nova-api.service >> >> Job for openstack-nova-api.service failed because the control process >> exited with error code. See "systemctl status openstack-nova-api.service" >> and "journalctl -xe" for details. >> >> [root at localhost ~(keystone_admin)]# systemctl status >> openstack-nova-api.service >> >> ? openstack-nova-api.service - OpenStack Nova API Server >> >> Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; >> enabled; vendor preset: disabled) >> >> Active: activating (start) since Wed 2016-06-08 07:59:20 EDT; 2s ago >> >> Main PID: 179955 (nova-api) >> >> CGroup: /system.slice/openstack-nova-api.service >> >> ??179955 /usr/bin/python2 /usr/bin/nova-api >> >> >> >> Jun 08 07:59:20 localhost systemd[1]: Starting OpenStack Nova API >> Server... >> >> Jun 08 07:59:22 localhost python2[179955]: detected unhandled Python >> exception in '/usr/bin/nova-api' >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> [root at localhost ~(keystone_admin)]# keystone endpoint-list >> >> /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: >> DeprecationWarning: The keystone CLI is deprecated in favor of >> python-openstackclient. For a Python library, continue using >> python-keystoneclient. 
>> >> 'python-keystoneclient.', DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: >> DeprecationWarning: Constructing an instance of the >> keystoneclient.v2_0.client.Client class without a session is deprecated as >> of the 1.7.0 release and may be removed in the 2.0.0 release. >> >> 'the 2.0.0 release.', DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: >> DeprecationWarning: Using the 'tenant_name' argument is deprecated in >> version '1.7.0' and will be removed in version '2.0.0', please use the >> 'project_name' argument instead >> >> super(Client, self).__init__(**kwargs) >> >> /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: >> DeprecationWarning: Using the 'tenant_id' argument is deprecated in version >> '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' >> argument instead >> >> return f(*args, **kwargs) >> >> /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: >> DeprecationWarning: Constructing an HTTPClient instance without using a >> session is deprecated as of the 1.7.0 release and may be removed in the >> 2.0.0 release. >> >> 'the 2.0.0 release.', DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: >> DeprecationWarning: keystoneclient.session.Session is deprecated as of the >> 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed >> in future releases. >> >> DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: >> DeprecationWarning: keystoneclient auth plugins are deprecated as of the >> 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in >> future releases. 
>> >> 'in future releases.', DeprecationWarning) >> >> >> +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ >> >> | id | region | >> publicurl | >> internalurl | >> adminurl | service_id | >> >> >> +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ >> >> | 02fcec9a7b834128b3e30403c4ed0de7 | RegionOne | >> http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | >> http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | >> http://172.18.121.48:8080/v1/AUTH_%(tenant_id)s | >> 5533324a63d8402888040832640a19d0 | >> >> | 295802909413422cb7c22dc1e268bce9 | RegionOne | >> http://172.18.121.48:8774/v2/%(tenant_id)s | >> http://172.18.121.48:8774/v2/%(tenant_id)s | >> http://172.18.121.48:8774/v2/%(tenant_id)s | >> f7fe68bf4cec47a4a3c942f3916dc377 | >> >> | 2a125f10b0d04f8a9306dede85b65514 | RegionOne | >> http://172.18.121.48:9696 | >> http://172.18.121.48:9696 | >> http://172.18.121.48:9696 | b2a60cdc144e40a49757f13c2264f030 | >> >> | 2d1a91d39f3d421cb1b2fe73fba5fd3a | RegionOne | >> http://172.18.121.48:8777 | >> http://172.18.121.48:8777 | >> http://172.18.121.48:8777 | e6d750ac5ef3433799d4fe39518a3fe6 | >> >> | 47b634f3e18e4caf914521a1a4157008 | RegionOne | >> http://172.18.121.48:8042 | >> http://172.18.121.48:8042 | >> http://172.18.121.48:8042 | 07cd8adf66254b4ab9b07be03a24084b | >> >> | 595913f7227b44dc8753db3b0cf6acdc | RegionOne | >> http://172.18.121.48:8041 | >> http://172.18.121.48:8041 | >> http://172.18.121.48:8041 | f43240abe5f3476ea64a8bd381fe4da7 | >> >> | 64381b509bc84639b6a4710e6d99a23b | RegionOne | >> http://172.18.121.48:8776/v1/%(tenant_id)s | >> http://172.18.121.48:8776/v1/%(tenant_id)s | >> 
http://172.18.121.48:8776/v1/%(tenant_id)s | >> 7edc7bedf93d4f388185699b9793ec7f | >> >> | 727d25775be54c9f8453f697ae5cb625 | RegionOne | >> http://172.18.121.48:5000/v2.0 | >> http://172.18.121.48:5000/v2.0 | >> http://172.18.121.48:35357/v2.0 | >> 25e99a2a98f244d9a73bf965acdd39da | >> >> | 9049338c57574b2d8ff8308b1a4265a5 | RegionOne | >> http://172.18.121.48:8776/v2/%(tenant_id)s | >> http://172.18.121.48:8776/v2/%(tenant_id)s | >> http://172.18.121.48:8776/v2/%(tenant_id)s | >> 6e070f0629094b72b66025250fdbda64 | >> >> | c051c0f9649143f6b29eaf0895940abe | RegionOne | >> http://172.18.121.48:9292 | >> http://172.18.121.48:9292 | >> http://172.18.121.48:9292 | 40874e10139a47eb88dfec2114047a34 | >> >> | ee4a00c1e8334cb8921fa3f2a7c82f1b | RegionOne | >> http://172.18.121.48:8774/v3 | >> http://172.18.121.48:8774/v3 | >> http://172.18.121.48:8774/v3 | 24c92ce4cd354e3db6c5ad59b8beeae8 >> | >> >> | fa60b5ba0ab7436ab1ffebb2982d3ccc | RegionOne | >> http://127.0.0.1:8776/v3/%(tenant_id)s | >> http://127.0.0.1:8776/v3/%(tenant_id)s | >> http://127.0.0.1:8776/v3/%(tenant_id)s | >> b0b6b97cf9d649c9800300cc64b0e866 | >> >> >> +----------------------------------+-----------+-------------------------------------------------+-------------------------------------------------+-------------------------------------------------+----------------------------------+ >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> >> >> [root at localhost ~(keystone_admin)]# netstat -ntlp | grep 8774 >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> [root at localhost ~(keystone_admin)]# ps -ef | grep nova-api >> >> nova 156427 1 86 07:51 ? 
00:00:01 /usr/bin/python2 >> /usr/bin/nova-api >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> [root at localhost ~(keystone_admin)]# lsof -i :8774 >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> >> >> [root at localhost ~(keystone_admin)]# keystone user-list >> >> /usr/lib/python2.7/site-packages/keystoneclient/shell.py:64: >> DeprecationWarning: The keystone CLI is deprecated in favor of >> python-openstackclient. For a Python library, continue using >> python-keystoneclient. >> >> 'python-keystoneclient.', DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:145: >> DeprecationWarning: Constructing an instance of the >> keystoneclient.v2_0.client.Client class without a session is deprecated as >> of the 1.7.0 release and may be removed in the 2.0.0 release. >> >> 'the 2.0.0 release.', DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/v2_0/client.py:147: >> DeprecationWarning: Using the 'tenant_name' argument is deprecated in >> version '1.7.0' and will be removed in version '2.0.0', please use the >> 'project_name' argument instead >> >> super(Client, self).__init__(**kwargs) >> >> /usr/lib/python2.7/site-packages/debtcollector/renames.py:45: >> DeprecationWarning: Using the 'tenant_id' argument is deprecated in version >> '1.7.0' and will be removed in version '2.0.0', please use the 'project_id' >> argument instead >> >> return f(*args, **kwargs) >> >> /usr/lib/python2.7/site-packages/keystoneclient/httpclient.py:371: >> DeprecationWarning: Constructing an HTTPClient instance without using a >> session is deprecated as of the 1.7.0 release and may be removed in the >> 2.0.0 release. >> >> 'the 2.0.0 release.', DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/session.py:140: >> DeprecationWarning: keystoneclient.session.Session is deprecated as of the >> 2.1.0 release in favor of keystoneauth1.session.Session. It will be removed >> in future releases. 
>> >> DeprecationWarning) >> >> /usr/lib/python2.7/site-packages/keystoneclient/auth/identity/base.py:56: >> DeprecationWarning: keystoneclient auth plugins are deprecated as of the >> 2.1.0 release in favor of keystoneauth1 plugins. They will be removed in >> future releases. >> >> 'in future releases.', DeprecationWarning) >> >> >> +----------------------------------+------------+---------+----------------------+ >> >> | id | name | enabled | >> email | >> >> >> +----------------------------------+------------+---------+----------------------+ >> >> | 266f5859848e4f39b9725203dda5c3f2 | admin | True | >> root at localhost | >> >> | 79a6ff3cc7cc4d018247c750adbc18e7 | aodh | True | >> aodh at localhost | >> >> | 90f28a2a80054132a901d39da307213f | ceilometer | True | >> ceilometer at localhost | >> >> | 16fa5ffa60e147d89ad84646b6519278 | cinder | True | >> cinder at localhost | >> >> | c6312ec6c2c444288a412f32173fcd99 | glance | True | >> glance at localhost | >> >> | ac8fb9c33d404a1697d576d428db90b3 | gnocchi | True | >> gnocchi at localhost | >> >> | 1a5b4da4ed974ac8a6c78b752ac8fab6 | neutron | True | >> neutron at localhost | >> >> | f21e8a15da5c40b7957416de4fa91b62 | nova | True | >> nova at localhost | >> >> | b843358d7ae44944b11af38ce4b61f4d | swift | True | >> swift at localhost | >> >> >> +----------------------------------+------------+---------+----------------------+ >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> [root at localhost ~(keystone_admin)]# nova-manage service list >> >> Option "verbose" from group "DEFAULT" is deprecated for removal. Its >> value may be silently ignored in the future. >> >> Option "notification_driver" from group "DEFAULT" is deprecated. Use >> option "driver" from group "oslo_messaging_notifications". >> >> Option "notification_topics" from group "DEFAULT" is deprecated. Use >> option "topics" from group "oslo_messaging_notifications". 
>> >> DEPRECATED: Use the nova service-* commands from python-novaclient >> instead or the os-services REST resource. The service subcommand will be >> removed in the 14.0 release. >> >> Binary Host Zone >> Status State Updated_At >> >> nova-osapi_compute 0.0.0.0 internal >> enabled XXX None >> >> nova-metadata 0.0.0.0 internal >> enabled XXX None >> >> nova-cert localhost internal >> enabled XXX 2016-06-08 06:12:38 >> >> nova-consoleauth localhost internal >> enabled XXX 2016-06-08 06:12:37 >> >> nova-scheduler localhost internal >> enabled XXX 2016-06-08 06:12:38 >> >> nova-conductor localhost internal >> enabled XXX 2016-06-08 06:12:37 >> >> nova-compute localhost nova >> enabled XXX 2016-06-08 06:12:43 >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> [root at localhost ~(keystone_admin)]# ls -l /var/log/nova/ >> >> total 4 >> >> -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-api.log >> >> -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-cert.log >> >> -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-compute.log >> >> -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-conductor.log >> >> -rw-r--r--. 1 nova nova 0 Jun 8 05:22 nova-consoleauth.log >> >> -rw-r--r--. 1 root root 0 Jun 8 05:22 nova-manage.log >> >> -rw-r--r--. 1 nova nova 995 Jun 8 05:32 nova-novncproxy.log >> >> -rw-r--r--. 1 nova nova 0 Jun 8 05:23 nova-scheduler.log >> >> [root at localhost ~(keystone_admin)]# >> >> >> >> >> >> Regards, >> >> Chinmaya >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From apevec at redhat.com Thu Jun 9 11:44:06 2016
From: apevec at redhat.com (Alan Pevec)
Date: Thu, 9 Jun 2016 13:44:06 +0200
Subject: [rdo-list] Packstack refactor and future ideas
In-Reply-To: 
References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com>
 <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com>
 <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com>
 <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com>
Message-ID: 

> What is the difference between creating a new "unsupported" additional
> tool and marking multihost capabilities in packstack as
> "unsuported/untested"?.

Looks like one of my replies didn't reach rdo-list (I got ... User unknown ??)

The difference is that no matter what deprecation warnings you put up, people will keep using features until they're really gone, and those features are out of scope.

> Current code enabling multihost in packstack is working and i have

It still adds complexity vs. assuming local execution, i.e. now Packstack does ssh even to localhost, right?

> doubts about the expected wins in time supporting multihost in
> packstack versus creating a new ansible tooling and modifying
> packstack to work fine in this new "externally orchestrated mode".

The win is a clear focus for the tool, and local execution without messing with ssh. I like Javier's proposal that would keep the controller, network and compute roles: then you just need to execute packstack on multiple machines with the same answer file, manually or using Ansible. In that case, the "new ansible tooling" is really just an example playbook in docs/.
Cheers, Alan From apevec at redhat.com Thu Jun 9 11:47:20 2016 From: apevec at redhat.com (Alan Pevec) Date: Thu, 9 Jun 2016 13:47:20 +0200 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> Message-ID: > We could take an easier way and assume we only have 3 roles, as in the current refactored code: controller, network, compute. The logic would then be: > - By default we install everything, so all in one > - If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_NETWORK_HOSTS, we apply the network manifest > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > Of course, the last two options would assume a first server is installed as controller. > > This would allow us to reuse the same answer file on all runs (one per host as you proposed), eliminate the ssh code as we are always running locally, and make some assumptions in the python code, like expecting OPM to be deployed and such. A contributed ansible wrapper to automate the runs would be straightforward to create. > > What do you think? Would it be worth the effort? +2 I like that proposal a lot! An ansible wrapper is then just an example playbook in docs but could be done w/o ansible as well, manually or using some other remote execution tooling of user's choice. 
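The per-role workflow proposed above can be sketched as a plain shell wrapper. This is not from the thread: the host names and answer-file name are made up, `--gen-answer-file` and `--answer-file` are existing packstack options, and the `run` helper only prints each step, so this is a dry-run plan rather than a tested recipe.

```shell
# Dry-run sketch of the proposed workflow -- not from the thread.
# 'run' just echoes each step; remove it to actually execute.
run() { echo "+ $*"; }

ANSWERS=multinode-answers.txt                 # hypothetical file name
run packstack --gen-answer-file="$ANSWERS"    # generate once, set role hosts
run packstack --answer-file="$ANSWERS"        # first run: the controller host

# Re-use the very same answer file locally on every other node:
for host in network1 compute1 compute2; do    # hypothetical host names
  run scp "$ANSWERS" root@"$host":
  run ssh root@"$host" packstack --answer-file="$ANSWERS"
done
```

An Ansible playbook doing the same thing would just loop over an inventory and perform the copy-and-run steps per host, which is all the "example playbook in docs/" would need to contain.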
Alan

From jslagle at redhat.com Thu Jun 9 11:56:12 2016
From: jslagle at redhat.com (James Slagle)
Date: Thu, 9 Jun 2016 07:56:12 -0400
Subject: [rdo-list] Packstack refactor and future ideas
In-Reply-To: 
References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com>
 <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com>
 <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com>
Message-ID: <20160609115611.GH32033@localhost.localdomain>

On Thu, Jun 09, 2016 at 12:40:39AM +0200, Hugh Brock wrote:
> Here's a possibly stupid question, indulge me....
>
> Seems like we're always going to need a simple (ish) tool that just
> installs the openstack services on a single machine, without any need for
> VMs.
>
> In fact, the tripleo installer - instack - is one such tool. Packstack is
> another, more flexible such tool. Should we consider merging or adapting
> them to be the same tool?

The point of instack has always been to reuse the same tooling that is used for the overcloud to do the undercloud install. Given:

- the composability that is coming in tripleo-heat-templates
- driving SoftwareDeployment via Heat without a full stack
- applying those SoftwareDeployments to servers that don't even have to be known to Heat

I actually think we'd move more in the direction of using tripleo-heat-templates in some fashion to install the undercloud.

Also given the requirements around undercloud HA, TripleO doesn't want to reinvent the wheel coming up with a new multinode HA installer. Users are likely to want composability, network isolation, IPv6, etc., for their HA underclouds too. Reusing what we're already doing for the overcloud makes the most sense to me. That would be the next iteration of instack IMO, using Heat and tripleo-heat-templates. As long as those are our tools for the overcloud, I'd want to reuse them for the undercloud as well, especially as the requirements grow to require more complexity.
That solution might result in a nice and simple AIO installer as well, but I don't hear a lot of requests for that outside the context of "how can we get rid of packstack". Which doesn't seem like all that valid of a reason tbh, given that packstack is already really good at doing AIO.

Approaching it from the other angle, what if packstack were really good at multinode HA, and we used that for undercloud HA? That could definitely work, but you don't have any code reuse between the undercloud and overcloud other than the puppet modules. Further, there would be a fair amount of churn in the install experience unless we developed a compatibility layer as well; would we really want to deprecate every undercloud.conf already out there, and require rewriting it as an answers file? I think there's a fair amount of complexity and trade-offs to consider in going this route as well.

I think the last time we had this discussion, it more or less inspired/moved us along the lines of creating tripleo-quickstart. It's just not clear to me if the current discussion is about making AIO better or making multinode easier via packstack. It might be useful to define who we're actually targeting with this effort, and what the requirements are.

IME, the effort to get to a simple and working multinode installer is actually pretty easy. But then you're staring at a huge amount of complexity to move it to the next level of even being usable for POC's. We're spending a lot of time trying to make TripleO easier for POC's as well. I think what we're finding though is that there are no "simple" POC's either; every one is different. You still need a fair amount of customization, which results in complexity, even for POC's.
--
-- James Slagle
--

From marius at remote-lab.net Thu Jun 9 12:17:44 2016
From: marius at remote-lab.net (Marius Cornea)
Date: Thu, 9 Jun 2016 14:17:44 +0200
Subject: [rdo-list] Newton test day repo
Message-ID: 

Hi everyone,

I noticed that after downloading the repo files mentioned in the Newton test day page[1] I get a repo called delorean-mitaka-testing. Is this ok given that we're testing Newton?

Thank you

[1] https://www.rdoproject.org/testday/newton/milestone1/

From amoralej at redhat.com Thu Jun 9 12:23:04 2016
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Thu, 9 Jun 2016 14:23:04 +0200
Subject: [rdo-list] Newton test day repo
In-Reply-To: 
References: 
Message-ID: 

Hi,

You should get two repos:

delorean delorean-openstack-sahara-18aa22e7c579f12fafce28f09948e260e9cadc6c 436+208
delorean-mitaka-testing/x86_64 delorean-mitaka-testing 737+430

The delorean-mitaka-testing repo only has some dependencies; the real OpenStack Newton packages are in delorean-openstack-sahara-18aa22e7c579f12fafce28f09948e260e9cadc6c. So yes, you are installing Newton.

Best regards,

Alfredo

On Thu, Jun 9, 2016 at 2:17 PM, Marius Cornea wrote:
> Hi everyone,
>
> I noticed that after downloading the repo files mentioned in the
> Newton test day page[1] I get a repo called delorean-mitaka-testing.
> Is this ok given that we're testing Newton?
> > Thank you > > [1] https://www.rdoproject.org/testday/newton/milestone1/ > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From ayoung at redhat.com Thu Jun 9 14:12:02 2016 From: ayoung at redhat.com (Adam Young) Date: Thu, 9 Jun 2016 10:12:02 -0400 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com> Message-ID: On 06/08/2016 07:05 PM, Ivan Chavero wrote: > > ----- Original Message ----- >> From: "Alan Pevec" >> To: "Ivan Chavero" >> Cc: "rdo-list" >> Sent: Wednesday, June 8, 2016 5:48:07 PM >> Subject: Re: [rdo-list] Packstack refactor and future ideas >> >>> I don't agree with this, we can set the untested features as "experimental" >> or "unsupported" >> >> Best is to remove them to make that clear, because no matter what >> deprecation warnings you put[*] people will keep using features until >> they're gone, giving us bug reports to take care of and without CI to >> verify fixes. >> It's long-term maintenance burden I am concerned about, for features >> which are clearly out of scope. > I understand this, my concern is that if we remove this feature we will leave > users with no tool for doing multinode in a lightweight way and this might > drive off users from testing/adopting RDO. I think it is starting to have the opposite effect. Packstack, being available, gives the wrong idea about RDO: you are supposed to install bare metal. The Tripleo Quickstart approach is that everything is in a VM. 
Packstack is doing too much: image building, provision, and running the system. A better approach would be to have our tooling set out so that a user can build their own images, and then deploy to a VM. OR better yet, a container. We need to drive on to Kolla. Kolla as the Controller for Tripleo and Kolla running in a VM and Kolla running on my desktop should all be close enough to identical to avoid the fragmentation we have now. > > I'm willing to create CI tests for multinode Packstack in order to maintain > this features. > > And believe me it would be easier for me just to drop this features but i really > think users are benefiting from this features. > > Cheers, > Ivan > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From bderzhavets at hotmail.com Thu Jun 9 14:42:22 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 9 Jun 2016 14:42:22 +0000 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <1420215145.58009304.1465427113133.JavaMail.zimbra@redhat.com>, Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Adam Young Sent: Thursday, June 9, 2016 10:12 AM To: rdo-list at redhat.com Subject: Re: [rdo-list] Packstack refactor and future ideas On 06/08/2016 07:05 PM, Ivan Chavero wrote: > > ----- Original Message ----- >> From: "Alan Pevec" >> To: "Ivan Chavero" >> Cc: "rdo-list" >> Sent: Wednesday, June 8, 2016 5:48:07 PM >> Subject: Re: [rdo-list] Packstack refactor and future ideas >> >>> I don't agree with this, we can set the untested features as "experimental" >> or "unsupported" >> >> Best is to remove them to make that clear, 
because no matter what
>> deprecation warnings you put[*] people will keep using features until
>> they're gone, giving us bug reports to take care of and without CI to
>> verify fixes.
>> It's long-term maintenance burden I am concerned about, for features
>> which are clearly out of scope.
> I understand this, my concern is that if we remove this feature we will leave
> users with no tool for doing multinode in a lightweight way and this might
> drive off users from testing/adopting RDO.

I think it is starting to have the opposite effect. Packstack, being available, gives the wrong idea about RDO: you are supposed to install bare metal. The Tripleo Quickstart approach is that everything is in a VM.

Sorry if I am asking a stupid question, but in the meantime the TripleO QuickStart configuration generated on the VIRTHOST is not persistent between cold reboots. As soon as I shut down the UNDERCLOUD VM (as stack), it's gone forever. Packstack deployments on usual VMs driven by KVM/Libvirt are persistent between cold reboots; I don't have to start every morning from scratch. Am I missing something in the current TripleO QuickStart status?

Thanks.
Boris.

Packstack is doing too much: image building, provision, and running the system. A better approach would be to have our tooling set out so that a user can build their own images, and then deploy to a VM. OR better yet, a container.

We need to drive on to Kolla. Kolla as the Controller for Tripleo and Kolla running in a VM and Kolla running on my desktop should all be close enough to identical to avoid the fragmentation we have now.
> > Cheers, > Ivan > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichavero at redhat.com Thu Jun 9 14:48:51 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Thu, 9 Jun 2016 10:48:51 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> Message-ID: <368900734.58106032.1465483731791.JavaMail.zimbra@redhat.com> > In fact, i'd say that the best way to go for this modular approach is > with containers and a orchestration tool designed for them > (kubernetes, right?)
but this is a different story, :)

I agree with you, I've been flirting with the idea of doing that for a while now.

Cheers,
Ivan

From apevec at redhat.com Thu Jun 9 14:50:44 2016
From: apevec at redhat.com (Alan Pevec)
Date: Thu, 9 Jun 2016 16:50:44 +0200
Subject: [rdo-list] Packstack refactor and future ideas
In-Reply-To: <20160609115611.GH32033@localhost.localdomain>
References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com>
 <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com>
 <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com>
 <20160609115611.GH32033@localhost.localdomain>
Message-ID: 

> the current discussion is about making AIO better or making multinode easier via packstack.

The current discussion is about scoping Packstack right, so it can both fulfill its mission to be an easy introduction to OpenStack for new community members and at the same time have a maintainable codebase. "An easy introduction to OpenStack" means providing a real working cloud on a normal 8-16 GB laptop with the ability to run at least one real instance, not cirros in an emulated VM-in-VM. For production use, as you say, doing multinode right is not simple, and that's why there's TripleO. The multinode support historically included in Packstack is incomplete for real production usage, and making it better is out of scope, hence my proposal to remove it.

Cheers,
Alan

From rbowen at redhat.com Thu Jun 9 20:56:25 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 9 Jun 2016 16:56:25 -0400
Subject: [rdo-list] Deutsche OpenStack Tage
Message-ID: 

If anyone is planning to attend Deutsche OpenStack Tage - https://openstack-tage.de/ - please let me know. Thanks.
--
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity

From ichavero at redhat.com Thu Jun 9 20:57:19 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Thu, 9 Jun 2016 16:57:19 -0400 (EDT)
Subject: [rdo-list] Packstack refactor and future ideas
In-Reply-To: 
References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com>
 <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com>
 <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com>
 <20160609115611.GH32033@localhost.localdomain>
Message-ID: <110388196.58176144.1465505839045.JavaMail.zimbra@redhat.com>

----- Original Message -----
> From: "Alan Pevec"
> To: "James Slagle"
> Cc: "rdo-list" , "Javier Pena"
> Sent: Thursday, June 9, 2016 9:50:44 AM
> Subject: Re: [rdo-list] Packstack refactor and future ideas
>
> > the current discussion is about making AIO better or making multinode
> > easier via packstack.
>
> Current discussion is about scoping Packstack right, so it can both
> fulfill its mission to be an easy introduction to OpenStack for the
> new community members and at the same time have maintainable codebase.
> "An easy introduction to OpenStack" means providing real working cloud
> on a normal 8-16 GB laptop with the ability to run at least one real
> instance, not cirros in emulated VM-in-VM.
> For production use, as you say, doing multinode right is not simple
> and that's why there's TripleO.
> Multinode that has been historically included in Packstack is
> incomplete for real production usage and making it better is out of
> scope hence my proposal to remove it.

Taking into account the comments by Javier and Daniel, I think we can effectively eliminate the multinode functionality of Packstack without losing its ability to install different components separately.
Extending what Daniel said, we could do stuff like:

packstack --controller-node
packstack --compute-node
packstack --network-node

and if you want you can use: packstack --os-install-mariadb=y , etc...

Cheers,
Ivan

From rbowen at redhat.com Thu Jun 9 21:05:17 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 9 Jun 2016 17:05:17 -0400
Subject: [rdo-list] OpenStack Days: Silicon Valley
Message-ID: <079eaa19-1d15-ac99-1bcf-0d80f1e0567b@redhat.com>

Likewise, if you're planning to attend OpenStack Days Silicon Valley - https://www.openstacksv.com/ - I would appreciate hearing from you. Thanks.

--Rich

--
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity

From apevec at redhat.com Fri Jun 10 12:37:06 2016
From: apevec at redhat.com (Alan Pevec)
Date: Fri, 10 Jun 2016 14:37:06 +0200
Subject: [rdo-list] [CentOS-devel] Rebuilding OpenStack for AArch64
In-Reply-To: <575AA0FD.2060100@linaro.org>
References: <575AA0FD.2060100@linaro.org>
Message-ID: 

adding rdo-list

On Fri, Jun 10, 2016 at 1:14 PM, Marcin Juszkiewicz wrote:
> There were discussions on irc about it but I lost track of who I should
> speak with etc so decided to write to mailing list instead.
>
> The goal is to rebuild OpenStack 'mitaka' packages for AArch64 (AltArch) in
> CBS (there is one builder for this architecture).
>
> Question is how to make it in best way?
>
> One of ideas was:
>
> 1. create new tag (cloud7-openstack-mitaka-candidate-aarch64?)
> 2. import all 'noarch' builds from cloud7-openstack-mitaka-* tags
> 3. send all required packages to the builder
> 4. test resulting packages
> 5. import/merge all of them to proper cloud7-openstack-mitaka-* tag
>
> Other was about doing scratch builds and merging them with x86-64 ones.
>
> How we can proceed? I think that first idea is good as it allows us to do
> builds without touching x86-64 tag before we are ready for merging. But I do
> not have koji experience so may be wrong.
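For readers without koji experience, the first idea above maps onto ordinary koji subcommands roughly as follows. This is a sketch only: the new tag name comes from the mail, the existing tag name and the build NVR are my guesses, CBS permissions and build targets may differ, and the `run` helper just prints each step instead of executing it.

```shell
# Dry-run sketch of steps 1-2 and the final import -- not a tested recipe.
run() { echo "+ $*"; }

TAG=cloud7-openstack-mitaka-candidate-aarch64            # tag name from the mail
run koji add-tag --arches=aarch64 "$TAG"                 # 1. create the new tag
run koji list-tagged cloud7-openstack-mitaka-candidate   # 2. list builds to import (tag name assumed)
run koji tag-build "$TAG" python-novaclient-3.3.1-1.el7  # illustrative NVR, not a real build
# 3./4. submit and test aarch64 builds against a target based on "$TAG",
# 5. then tag the verified builds back into the proper mitaka tags.
```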
New tag is the way to go, we don't want to block x86_64. Cheers, Alan From thomas.oulevey at cern.ch Fri Jun 10 13:04:02 2016 From: thomas.oulevey at cern.ch (Thomas Oulevey) Date: Fri, 10 Jun 2016 15:04:02 +0200 Subject: [rdo-list] [CentOS-devel] Rebuilding OpenStack for AArch64 In-Reply-To: References: <575AA0FD.2060100@linaro.org> Message-ID: <575ABAC2.8040702@cern.ch> Hi, >> How we can proceed? I think that first idea is good as it allows us to do >> builds without touching x86-64 tag before we are ready for merging. But I do >> not have koji experience so may be wrong. > > New tag is the way to go, we don't want to block x86_64. > +1 The status, we have a blocker with mergerepo / mergerepo_c [1] which are not merging package with a dist tag different on the source package and the binary. e.g: I had to leave the problem on the side during 6.8 cycle but we are resuming some work this week with Brian and hope to understand better the problem. -- Thomas [1]: $ repoquery --repofrompath=reponame,http://cbs.centos.org/kojifiles/repos/storage7-ceph-jewel-el7-build/latest/aarch64/ --repoid=reponame -q json-c return no result The mergerepo tasks (mergerepo_c because it is more verbose, same issue with mergerepo): https://cbs.centos.org/kojifiles/work/tasks/4276/94276/mergerepos.log """ 19:33:02: Reading metadata for json-c (0.11-4.el7.aarch64) 19:33:02: Package json-c has forbidden srpm json-c-0.11-4.el7.src.rpm """ (REf in github : https://github.com/rpm-software-management/createrepo_c/blob/master/src/mergerepo_c.c#L784) $ rpm -qpi http://mirror.centos.org/altarch/7/os/aarch64/Packages/json-c-0.11-4.el7.aarch64.rpm warning: http://mirror.centos.org/altarch/7/os/aarch64/Packages/json-c-0.11-4.el7.aarch64.rpm: Header V4 RSA/SHA1 Signature, key ID 305d49d6: NOKEY Name : json-c Relocations: (not relocatable) Version : 0.11 Vendor: CentOS Release : 4.el7 Build Date: Fri 24 Apr 2015 02:25:38 UTC Install Date: (not installed) Build Host: arm64.centos.lab Group : 
Development/Libraries Source RPM: json-c-0.11-4.el7.src.rpm Size : 151697 License: MIT Signature : RSA/SHA1, Wed 29 Jul 2015 13:30:29 UTC, Key ID 6c7cb6ef305d49d6 Packager : CentOS BuildSystem URL : https://github.com/json-c/json-c/wiki Summary : A JSON implementation in C Description : JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C, output them as JSON formatted strings and parse JSON formatted strings back into the C representation of JSON objects. So it means that aarch64 buildroot is not complete and therefore the build fails if certain dependencies are not found. (which is teh case for ceph jewel) From adam.huffman at gmail.com Sat Jun 11 07:23:07 2016 From: adam.huffman at gmail.com (Adam Huffman) Date: Sat, 11 Jun 2016 08:23:07 +0100 Subject: [rdo-list] Late test day failure Message-ID: I'm following the TripleO quickstart instructions at: https://www.rdoproject.org/tripleo/ There is a failure quite early on: TASK [provision/remote : Grant libvirt privileges to non-root user] ************ task path: /home/adam/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/tasks/main.yml:37 Saturday 11 June 2016 08:19:00 +0100 (0:00:01.699) 0:01:02.207 ********* fatal: [scaleway1]: FAILED! 
=> {"changed": true, "failed": true, "msg": "Destination directory /etc/polkit-1/localauthority/50-local.d does not exist"} NO MORE HOSTS LEFT ************************************************************* PLAY RECAP ********************************************************************* localhost : ok=4 changed=2 unreachable=0 failed=0 scaleway1 : ok=5 changed=3 unreachable=0 failed=1 Saturday 11 June 2016 08:19:02 +0100 (0:00:02.017) 0:01:04.225 ********* =============================================================================== TASK: setup ------------------------------------------------------------ 25.28s TASK: setup ------------------------------------------------------------ 14.68s TASK: provision/remote : Create virthost access key --------------------- 9.28s TASK: provision/remote : Grant libvirt privileges to non-root user ------ 2.02s TASK: provision/remote : Configure non-root user authorized_keys -------- 1.70s TASK: provision/remote : Create non-root user --------------------------- 1.59s TASK: provision/local : Ensure local working dir exists ----------------- 1.54s TASK: provision/teardown : Get UID of non-root user --------------------- 1.44s TASK: provision/local : Create empty ssh config file -------------------- 1.14s TASK: provision/local : Check that virthost is set ---------------------- 0.84s TASK: provision/local : Add the virthost to the inventory --------------- 0.82s TASK: provision/teardown : Check for processes owned by non-root user --- 0.68s TASK: provision/teardown : Wait for processes to exit ------------------- 0.63s TASK: provision/teardown : Remove non-root user ephemeral files --------- 0.61s TASK: provision/teardown : Kill (SIGKILL) all processes owned by non-root user --- 0.60s TASK: provision/teardown : Remove non-root user account ----------------- 0.57s TASK: provision/teardown : Kill (SIGTERM) all processes owned by non-root user --- 0.57s The host is a CentOS 7 machine. 
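Going by the error message alone, the missing directory could also be created by hand on the virthost before re-running quickstart.sh. This is my assumption as a stopgap, not a workaround suggested in the thread; it needs root on the virthost, and the `ETC` variable is only there so the command can be exercised outside /etc.

```shell
# Hypothetical workaround: pre-create the polkit local-authority directory
# that the "Grant libvirt privileges to non-root user" task writes into.
ETC="${ETC:-/etc}"                                  # real path is /etc
mkdir -p "$ETC/polkit-1/localauthority/50-local.d"
ls -ld "$ETC/polkit-1/localauthority/50-local.d"
```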
From bderzhavets at hotmail.com Sat Jun 11 10:36:27 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Sat, 11 Jun 2016 10:36:27 +0000
Subject: [rdo-list] Would it be possible to deploy via Tripleo QuickStart HA && DVR configuration ?
Message-ID: 

What exactly I mean: how to update ha.yml to set up the neutron router on the PCS cluster to be HA and distributed at the same time, and run Compute Node(s) (<=2) in DVR mode? In general, Mitaka would allow this configuration.

Thanks.
Boris.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Jan.van.Eldik at cern.ch Mon Jun 13 09:45:02 2016
From: Jan.van.Eldik at cern.ch (Jan van Eldik)
Date: Mon, 13 Jun 2016 11:45:02 +0200
Subject: [rdo-list] RDO/OpenStack bookmarks
In-Reply-To: <302795e9-ff56-9b69-a380-557682522fea@redhat.com>
References: <302795e9-ff56-9b69-a380-557682522fea@redhat.com>
Message-ID: <575E809E.5070001@cern.ch>

Hi Rich, all,

I think the content of the bookmark is still pretty up-to-date. Unless people want to overhaul the contents, I have a few formatting fixes (nitpicks, mostly) to suggest.

thanks, cheers, Jan

* "Getting started"
  - add a trailing "\" in the line "sudo yum install -y \"
* "Managing images"
  - change linebreak: $ qemu-img create -f qcow2 \
  - add a trailing "\" in the line "-kernel-kqemu -boot d \"
* "Grant role to user"
  - add a leading "$" in the line "$ openstack role add..."
* "Attach a volume to an instance"
  - replace by: $ openstack server add volume \
* "Managing networks"
  - add a trailing "\" in the line "[--prefix PREFIX] [--enable | --disable]"
* "Got Questions?"
  - increase fontsize

On 05/23/2016 09:13 PM, Rich Bowen wrote:
> I'm almost out of the RDO/OpenStack CLI cheatsheet bookmarks, and I was
> hoping that before we do another run of them, I could get a few eyes on
> them to be sure that what we are printing is still correct (it's been
> over a year since it was updated) and covers the most useful things.
> > https://github.com/redhat-openstack/website/tree/master/source/images/bookmark > > We have very limited space, of course, so deciding what's most important > can be difficult. > > I would appreciate any suggested edits and/or pull requests. > > --Rich > From apevec at redhat.com Mon Jun 13 10:51:01 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 13 Jun 2016 12:51:01 +0200 Subject: [rdo-list] RDO/OpenStack bookmarks In-Reply-To: <302795e9-ff56-9b69-a380-557682522fea@redhat.com> References: <302795e9-ff56-9b69-a380-557682522fea@redhat.com> Message-ID: > https://github.com/redhat-openstack/website/tree/master/source/images/bookmark I tried to edit ODT but it doesn't let me, is an image? * Get started in Install.. drop Fedora and use this URL: https://rdoproject.org/repos/rdo-release.rpm Also RHEL has pre-requisite non-default repos, so we might want to focus this bookmark on CentOS only? From whayutin at redhat.com Mon Jun 13 13:37:57 2016 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 13 Jun 2016 09:37:57 -0400 Subject: [rdo-list] Late test day failure In-Reply-To: References: Message-ID: Adam, Any chance you can pull in the following change and see if this resolves your issue? You will need to use the quickstart.sh --no-clone option after pulling this change. https://review.openstack.org/#/c/328984/ On Sat, Jun 11, 2016 at 3:23 AM, Adam Huffman wrote: > I'm following the TripleO quickstart instructions at: > > https://www.rdoproject.org/tripleo/ > > There is a failure quite early on: > > TASK [provision/remote : Grant libvirt privileges to non-root user] > ************ > task path: > /home/adam/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/tasks/main.yml:37 > Saturday 11 June 2016 08:19:00 +0100 (0:00:01.699) 0:01:02.207 > ********* > fatal: [scaleway1]: FAILED! 
=> {"changed": true, "failed": true, > "msg": "Destination directory /etc/polkit-1/localauthority/50-local.d > does not exist"} > > NO MORE HOSTS LEFT > ************************************************************* > > PLAY RECAP > ********************************************************************* > localhost : ok=4 changed=2 unreachable=0 failed=0 > scaleway1 : ok=5 changed=3 unreachable=0 failed=1 > > Saturday 11 June 2016 08:19:02 +0100 (0:00:02.017) 0:01:04.225 > ********* > > =============================================================================== > TASK: setup ------------------------------------------------------------ > 25.28s > TASK: setup ------------------------------------------------------------ > 14.68s > TASK: provision/remote : Create virthost access key --------------------- > 9.28s > TASK: provision/remote : Grant libvirt privileges to non-root user ------ > 2.02s > TASK: provision/remote : Configure non-root user authorized_keys -------- > 1.70s > TASK: provision/remote : Create non-root user --------------------------- > 1.59s > TASK: provision/local : Ensure local working dir exists ----------------- > 1.54s > TASK: provision/teardown : Get UID of non-root user --------------------- > 1.44s > TASK: provision/local : Create empty ssh config file -------------------- > 1.14s > TASK: provision/local : Check that virthost is set ---------------------- > 0.84s > TASK: provision/local : Add the virthost to the inventory --------------- > 0.82s > TASK: provision/teardown : Check for processes owned by non-root user --- > 0.68s > TASK: provision/teardown : Wait for processes to exit ------------------- > 0.63s > TASK: provision/teardown : Remove non-root user ephemeral files --------- > 0.61s > TASK: provision/teardown : Kill (SIGKILL) all processes owned by > non-root user --- 0.60s > TASK: provision/teardown : Remove non-root user account ----------------- > 0.57s > TASK: provision/teardown : Kill (SIGTERM) all processes owned by > non-root 
user --- 0.57s > > The host is a CentOS 7 machine. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Jun 13 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 13 Jun 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160613150003.8400160A4009@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-06-15 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Jun 13 15:39:45 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 13 Jun 2016 11:39:45 -0400 Subject: [rdo-list] RDO/OpenStack bookmarks In-Reply-To: References: <302795e9-ff56-9b69-a380-557682522fea@redhat.com> Message-ID: <268c16e4-963c-d339-6929-9ed9e1f6bde4@redhat.com> On 06/13/2016 06:51 AM, Alan Pevec wrote: >> https://github.com/redhat-openstack/website/tree/master/source/images/bookmark > > I tried to edit ODT but it doesn't let me, is an image? No, but with all of the overlapping boxes, it's often very difficult to get the cursor exactly where you want it. > > * Get started > in Install.. drop Fedora and use this URL: > https://rdoproject.org/repos/rdo-release.rpm Done > Also RHEL has pre-requisite non-default repos, so we might want to > focus this bookmark on CentOS only? > Are you referring just to the introductory text in the first section? Sure, we could do that. 
-- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From rbowen at redhat.com Mon Jun 13 16:52:27 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 13 Jun 2016 12:52:27 -0400 Subject: [rdo-list] Unanswered "RDO" questions on ask.openstack.org Message-ID: <4a648cd9-c20a-1674-1974-8e58ee497264@redhat.com> 57 unanswered questions: Adding hard drive space to RDO installation https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ Tags: cinder, openstack, space, add AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws ceilometer: I've installed openstack mitaka. but swift stops working when i configured the pipeline and ceilometer filter https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ Tags: ceilometer, openstack-swift, mitaka Fail on installing the controller on Cent OS 7 https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ Tags: installation, centos7, controller the error of service entity and API endpoints https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ Tags: service, entity, and, api, endpoints Running delorean fails: Git won't fetch sources https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ Tags: delorean, rdo RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-cant-resolve-trunk-mgtrdoprojectorg/ Tags: rdo-manager Keystone authentication: Failed to contact the endpoint. 
https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano adding computer node. https://ask.openstack.org/en/question/91417/adding-computer-node/ Tags: rdo, openstack Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: stack, resource, topology, dashboard Build of instance aborted: Block Device Mapping is Invalid. https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ Tags: cinder, lvm, centos7 No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack how to use chef auto manage openstack in RDO? https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ Tags: chef, rdo Separate Cinder storage traffic from management https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ Tags: cinder, separate, nic, iscsi Openstack installation fails using packstack, failure is in installation of openstack-nova-compute. Error: Dependency Package[nova-compute] has failures https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ Tags: novacompute, rdo, packstack, dependency, failure CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? 
https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ Tags: keyboard, map, keymap, vncproxy, novnc OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Can't create volume with cinder https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ Tags: cinder, glusterfs, nfs Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse Creating Sahara cluster: Error attach volume to instance https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, hadoop, icehouse, vanilla Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing RDO kilo installation metadata widget doesn't work https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ Tags: kilo, flavor, metadata Not able to ssh into RDO Kilo instance https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ Tags: rdo, instance-ssh redhat RDO enable access to swift via S3 
https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ Tags: swift, s3 openstack baremetal introspection internal server error https://ask.openstack.org/en/question/82790/openstack-baremetal-introspection-internal-server-error/ Tags: rdo, ironic-inspector, tripleo glance\nova command line SSL failure https://ask.openstack.org/en/question/82692/glancenova-command-line-ssl-failure/ Tags: glance, kilo-openstack, ssl -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From rbowen at redhat.com Mon Jun 13 18:25:40 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 13 Jun 2016 14:25:40 -0400 Subject: [rdo-list] RDO doc days this week Message-ID: <67f14587-3ea0-bf2c-823c-9ab29db41ac2@redhat.com> A few of us have discussed doing a doc day later this week - June 16, 17 - and I just realized I never announced it on rdo-list. In particular, a few of us will be reviewing the website to identify docs that reference EOL'ed versions, docs which need other updates, and new docs that we feel need to be written. Additionally, we'll be discussing how the "paths" through the site may be made clearer for the various audiences that come to the site. There are already some tickets to this effect in the website issue tracker - https://github.com/redhat-openstack/website/issues More eyes will be appreciated, if you have a little time this week. -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From rbowen at redhat.com Mon Jun 13 19:02:17 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 13 Jun 2016 15:02:17 -0400 Subject: [rdo-list] This week's OpenStack meetups. Message-ID: If you're a meetup group leader, or are speaking at a meetup, here's some info about how we can help you with that: https://www.rdoproject.org/events/meetup_assistance/ Below are the OpenStack meetups that I'm aware of in the coming week. 
--Rich * Tuesday June 14 in Seattle, WA, US: Learn How Datera's Elastic Data Fabric Can Accelerate OpenStack and Containers - http://www.meetup.com/OpenStack-Seattle/events/230928689/ * Tuesday June 14 in San Francisco, CA, US: SF Bay OpenStack Meetup: Cloud Native Networks with Neutron IPAM - http://www.meetup.com/openstack/events/231092447/ * Tuesday June 14 in Baltimore, MD, US: Network Virtualization in OpenStack - http://www.meetup.com/Software-Defined-Networking-Group-Baltimore-Washington/events/231197887/ * Wednesday June 15 in Lyon, FR: OpenStack Workshop Lyon 2016 - http://www.meetup.com/OpenStack-Rhone-Alpes/events/231387171/ * Wednesday June 15 in Hamburg, DE: Openstack Meetup powered by Windcloud - http://www.meetup.com/Openstack-Hamburg/events/231713712/ * Thursday June 16 in Philadelphia, PA, US: Private PaaS + Kubernetes + OpenStack! - http://www.meetup.com/Philly-OpenStack-Meetup-Group/events/231164733/ * Thursday June 16 in Portland, OR, US: OpenStack PDX Meetup - http://www.meetup.com/openstack-pdx/events/230892180/ * Thursday June 16 in Boston, MA, US: Exclusive Preview of the First Native Backup and Recovery Solution for OpenStack - http://www.meetup.com/Openstack-Boston/events/231006550/ * Thursday June 16 in Atlanta, GA, US: OpenStack Meetup (Topic TBD) - http://www.meetup.com/openstack-atlanta/events/230239816/ * Thursday June 16 in Montevideo, UY: El 13 no es mala suerte - http://www.meetup.com/OpenStack-Uruguay/events/231426806/ * Friday June 17 in Lagos, NG: OpenStack Meetup - http://www.meetup.com/OpenStack-User-Group-Nigeria/events/230988956/ * Sunday June 19 in Tambaram, IN: Want to Loud in Linux to shape your career ? 
- http://www.meetup.com/CloudnLoud-Openstack-Cloud-RedHat-Opensource/events/231588247/ -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From slinaber at redhat.com Tue Jun 14 04:27:04 2016 From: slinaber at redhat.com (Steven Linabery) Date: Mon, 13 Jun 2016 23:27:04 -0500 Subject: [rdo-list] current tempest packaging issues/bugs In-Reply-To: References: Message-ID: I thought this might be of general interest for subscribers to rdo-list, so I'm cc'ing that list here. On Mon, Jun 13, 2016 at 12:32 AM, Steven Linabery wrote: > Hello, > > dmellado, apevec and I have been working through various problems with > the openstack-tempest RPM. I want to summarize some of the issues and > give an overview of where things stand. > > 1) There is an issue with upstream openstack zaqar package that breaks > using some of the tempest cli tools. The relevant bug is: > > https://bugzilla.redhat.com/show_bug.cgi?id=1333884 > > When installing the mitaka openstack-tempest RPM, two things are > broken: 'tempest --help --debug' fails, and shows that the cli entry > points are not configured correctly. More crucially, the downstream > tempest configuration script stacktraces as well. > > We proposed a fix for setting the cli options which clears the above > errors, but it is failing the upstream gates, so I marked it as WIP: > https://review.openstack.org/#/c/327268/ > > > 2) We merged a change to the tempest spec in RDO to remove > requirements.txt and test-requirements.txt from the package source. > One consequence of that was breaking the ability to run tempest in a > virtual environment. The temporary workaround for people who want to > use a venv is to download the requirements.txt file from the midstream > repo, but once we have fixed the RPM, it should be unnecessary to use > a virtual environment. 
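A minimal sketch of the temporary venv workaround described above, assuming a python3 interpreter with the venv module (a mitaka-era CentOS 7 host would use python2 virtualenv instead); the requirements.txt URL below is a placeholder, not the actual midstream repo location:

```shell
# Create and activate a scratch virtual environment for tempest.
python3 -m venv tempest-venv
. tempest-venv/bin/activate

# Placeholder URL: substitute the real midstream repo's requirements.txt.
# curl -LO https://example.org/midstream-tempest/requirements.txt
# pip install -r requirements.txt tempest

# The interpreter should now resolve inside the venv, isolated from the
# system site-packages that carry the too-old urllib3 noted below.
python -c 'import sys; print(sys.prefix)'
deactivate
```

Once the RPM is fixed, none of this should be needed; the venv only isolates tempest from the distro-packaged dependency versions in the meantime.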
> > I'm not sure we should be supporting the use case of using a virtual > environment in any case, but fixing the RPM should obviate the need > for that. > > That work was tracked here: > https://bugzilla.redhat.com/show_bug.cgi?id=1342218 > > 3) Even with the zaqar patch from 1) above applied (manually, to > site-packages), we hit an issue where the version of urllib3 (and a > few associated dependencies) are too old in RDO mitaka. So we have > this bug to track getting those packages updated: > > https://bugzilla.redhat.com/show_bug.cgi?id=1344148 > > You will note that apevec has builds for these upgrades queued up, and > they can be used by installing the testing repo for which he kindly > provided a repo definition file in comment #7. > > 4) When (on a oooq undercloud) the fix from 1) is applied and the > upgraded packages are installed from 3), one can get the tempest > configuration script to pass, but running the smoketests fails. I've > started a bug to track fixing that issue: > > https://bugzilla.redhat.com/show_bug.cgi?id=1345736 > > > I think that covers the issues that I am aware of right now. dmellado > may have additional input here. I wanted people to have an idea of the > issues since I will be on PTO Monday 13-Jun. > > It's essential for us to get the RPM behaving properly so we can > import it downstream like we are doing with the other mitaka and > newton builds for standing up OSP 9 & 10. > > Thanks for reading, hth. 
> > Steve Linabery (eggs)

From marcin.juszkiewicz at linaro.org Wed Jun 15 10:10:30 2016
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Wed, 15 Jun 2016 11:10:30 +0100
Subject: [rdo-list] [CentOS-devel] Rebuilding OpenStack for AArch64
In-Reply-To: <5761206F.6000802@linaro.org>
References: <575AA0FD.2060100@linaro.org> <575ABAC2.8040702@cern.ch> <5761206F.6000802@linaro.org>
Message-ID: <57612996.7010205@linaro.org>

On 15/06/16 10:31, Marcin Juszkiewicz wrote:
>> 19:33:02: Reading metadata for json-c (0.11-4.el7.aarch64)
>> 19:33:02: Package json-c has forbidden srpm json-c-0.11-4.el7.src.rpm
>> """
> This is because of 0.11-4.el7_0 on x86 compared to 0.11-4.el7 on
> aarch64, right? Should be fixable if we rebuild json-c 0.11-4.el7_0 on
> aarch64 and update repositories?
>
> Is there a tool to use which would list such differences?

I checked a bit and compared the lists of packages in the os/ repo; it looks like many packages may need syncing ;(

From chkumar246 at gmail.com Wed Jun 15 11:19:28 2016
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 15 Jun 2016 16:49:28 +0530
Subject: [rdo-list] RDo Bug Statistics [2016-06-15]
Message-ID:

# RDO Bugs on 2016-06-15

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at . To report a new bug against RDO, go to:

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 143
- Fixed (MODIFIED, POST, ON_QA): 45

## Number of open bugs by component

dib-utils                  [ 1]  +
distribution               [ 7]  ++++++++
Documentation              [ 1]  +
instack                    [ 1]  +
instack-undercloud         [ 9]  ++++++++++
openstack-ceilometer       [ 2]  ++
openstack-cinder           [ 2]  ++
openstack-designate        [ 1]  +
openstack-glance           [ 1]  +
openstack-horizon          [ 1]  +
openstack-ironic-disco...  [ 1]  +
openstack-keystone         [ 1]  +
openstack-neutron          [ 3]  +++
openstack-nova             [ 5]  +++++
openstack-packstack        [ 34] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules   [ 7]  ++++++++
openstack-sahara           [ 2]  ++
openstack-selinux          [ 3]  +++
openstack-tripleo          [ 22] +++++++++++++++++++++++++
openstack-tripleo-heat...  [ 2]  ++
openstack-tripleo-imag...  [ 1]  +
openstack-trove            [ 1]  +
Package Review             [ 14] ++++++++++++++++
python-novaclient          [ 1]  +
rdo-manager                [ 14] ++++++++++++++++
rdopkg                     [ 1]  +
RFEs                       [ 2]  ++
tempest                    [ 3]  +++

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed. (143 bugs)

### dib-utils (1 bug)

[1263779 ] http://bugzilla.redhat.com/1263779 (NEW)
Component: dib-utils
Last change: 2016-04-18
Summary: Packstack Ironic admin_url misconfigured in nova.conf

### distribution (7 bugs)

[1243533 ] http://bugzilla.redhat.com/1243533 (NEW)
Component: distribution
Last change: 2016-06-01
Summary: (RDO) Tracker: Review requests for new RDO Liberty packages

[1316169 ] http://bugzilla.redhat.com/1316169 (ASSIGNED)
Component: distribution
Last change: 2016-05-18
Summary: openstack-barbican-api missing pid dir or wrong pid file specified

[1329341 ] http://bugzilla.redhat.com/1329341 (NEW)
Component: distribution
Last change: 2016-06-14
Summary: Tracker: Blockers and Review requests for new RDO Newton packages

[1301751 ] http://bugzilla.redhat.com/1301751 (NEW)
Component: distribution
Last change: 2016-04-18
Summary: Move all logging to stdout/err to allow systemd throttling logging of errors

[1290163 ] http://bugzilla.redhat.com/1290163 (NEW)
Component: distribution
Last change: 2016-05-17
Summary: Tracker: Blockers and Review requests for new RDO Mitaka packages

[1337335 ] http://bugzilla.redhat.com/1337335 (NEW)
Component: distribution
Last change: 2016-05-25
Summary: Hiera >= 2.x packaging

[1346240 ] http://bugzilla.redhat.com/1346240 (NEW)
Component: distribution
Last change: 2016-06-15
Summary: Erlang 18.3.3 update fails ### Documentation (1 bug) [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2016-04-18 Summary: [DOC] External network should be documents in RDO manager installation ### instack (1 bug) [1315827 ] http://bugzilla.redhat.com/1315827 (NEW) Component: instack Last change: 2016-05-09 Summary: openstack undercloud install fails with "Element pip- and-virtualenv already loaded." ### instack-undercloud (9 bugs) [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: wget is missing from qcow2 image fails instack-build- images script [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . 
[1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: Overcloud images contain Kilo repos [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: instack-build-images does not stop on certain errors [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2016-04-22 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: Installing instack undercloud on Fedora20 VM fails ### openstack-ceilometer (2 bugs) [1331510 ] http://bugzilla.redhat.com/1331510 (ASSIGNED) Component: openstack-ceilometer Last change: 2016-06-01 Summary: Gnocchi 2.0.2-1 release does not have Mitaka default configuration file [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-04-27 Summary: python-redis is not installed with packstack allinone ### openstack-cinder (2 bugs) [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2016-04-19 Summary: Configuration file in share forces ignore of auth_uri [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2016-04-19 Summary: [RFE] permit cinder to create a volume when root_squash is set to on for gluster storage ### openstack-designate (1 bug) [1343663 ] http://bugzilla.redhat.com/1343663 (NEW) Component: openstack-designate Last change: 
2016-06-07 Summary: openstack-designate are missing dependancies ### openstack-glance (1 bug) [1312466 ] http://bugzilla.redhat.com/1312466 (NEW) Component: openstack-glance Last change: 2016-04-19 Summary: Support for blueprint cinder-store-upload-download in glance_store ### openstack-horizon (1 bug) [1333508 ] http://bugzilla.redhat.com/1333508 (NEW) Component: openstack-horizon Last change: 2016-05-20 Summary: LBaaS v2 Dashboard UI ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2016-02-26 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (1 bug) [1337346 ] http://bugzilla.redhat.com/1337346 (NEW) Component: openstack-keystone Last change: 2016-06-01 Summary: CVE-2016-4911 openstack-keystone: Incorrect Audit IDs in Keystone Fernet Tokens can result in revocation bypass [openstack-rdo] ### openstack-neutron (3 bugs) [1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED) Component: openstack-neutron Last change: 2016-04-19 Summary: [RFE] [neutron] neutron services needs more RPM granularity [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2016-04-19 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2016-06-07 Summary: Use neutron-sanity-check in CI checks ### openstack-nova (5 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 (NEW) Component: openstack-nova Last change: 2016-04-22 Summary: Is there a way to configure IO throttling for RBD devices via configuration file [1344315 ] http://bugzilla.redhat.com/1344315 (NEW) Component: openstack-nova Last change: 2016-06-09 Summary: SRIOV PF/VF allocation fails with NUMA aware flavor [1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change:
2016-04-22 Summary: logrotate should copytruncate to avoid openstack logging to deleted files [1294747 ] http://bugzilla.redhat.com/1294747 (NEW) Component: openstack-nova Last change: 2016-05-16 Summary: Migration fails when the SRIOV PF is not online [1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2016-05-11 Summary: Ensure translations are installed correctly and picked up at runtime ### openstack-packstack (34 bugs) [1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis [1194678 ] http://bugzilla.redhat.com/1194678 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: On aarch64, nova.conf should default to vnc_enabled=False [1293693 ] http://bugzilla.redhat.com/1293693 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Keystone setup fails on missing required parameter [1286995 ] http://bugzilla.redhat.com/1286995 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: PackStack should configure LVM filtering with LVM/iSCSI [1344706 ] http://bugzilla.redhat.com/1344706 (ASSIGNED) Component: openstack-packstack Last change: 2016-06-10 Summary: openstack-ceilometer-compute fails to send metrics [1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-18 Summary: RFE: Provide option to set bind_host/bind_port for API services [1297692 ] http://bugzilla.redhat.com/1297692 (ON_DEV) Component: openstack-packstack Last change: 2016-05-19 Summary: Raise MariaDB max connections limit [1302766 ] http://bugzilla.redhat.com/1302766 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: Add Magnum support using puppet-magnum [1285494 ] http://bugzilla.redhat.com/1285494 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: openstack- 
packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf [1316222 ] http://bugzilla.redhat.com/1316222 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-18 Summary: Packstack installation failed due to wrong http config [1291492 ] http://bugzilla.redhat.com/1291492 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Unfriendly behavior of IP filtering for VXLAN with EXCLUDE_SERVERS [1227298 ] http://bugzilla.redhat.com/1227298 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack should support MTU settings [1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Packstack wording is unclear for demo and testing provisioning. [1208812 ] http://bugzilla.redhat.com/1208812 (ASSIGNED) Component: openstack-packstack Last change: 2016-06-15 Summary: add DiskFilter to scheduler_default_filters [1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined [1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2016-05-16 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt [982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-19 Summary: [RFE] Include Fedora cloud images in some nice way [1005073 ] http://bugzilla.redhat.com/1005073 (NEW) Component: openstack-packstack Last change: 2016-04-19 Summary: [RFE] Please add glance and nova lib folder config [903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: RFE: Include the ability in PackStack to support SSL for all REST services and message bus communication [1239027 ] http://bugzilla.redhat.com/1239027 (NEW) Component: openstack-packstack Last change: 
2016-04-18 Summary: please move httpd log files to corresponding dirs [1324070 ] http://bugzilla.redhat.com/1324070 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: RFE: PackStack Support for LBaaSv2 [1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: The warning message " NetworkManager is active " appears even when the NetworkManager is inactive [1292271 ] http://bugzilla.redhat.com/1292271 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Receive Msg 'Error: Could not find user glance' [1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: AMQP1.0 server configurations needed [1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2016-05-18 Summary: [RFE] SPICE support in packstack [1338496 ] http://bugzilla.redhat.com/1338496 (NEW) Component: openstack-packstack Last change: 2016-05-31 Summary: Failed to install with packstack [1312487 ] http://bugzilla.redhat.com/1312487 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack with Swift Glance backend does not seem to work [1184806 ] http://bugzilla.redhat.com/1184806 (NEW) Component: openstack-packstack Last change: 2016-04-28 Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend [1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-19 Summary: support Keystone LDAP [1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2016-04-19 Summary: swift: Admin user does not have permissions to see containers created by glance service [1286828 ] http://bugzilla.redhat.com/1286828 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: Packstack should have the option to install QoS (neutron) [1172467 ] 
http://bugzilla.redhat.com/1172467 (NEW) Component: openstack-packstack Last change: 2016-04-19 Summary: New user cannot retrieve container listing [1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: API services has all admin permission instead of service [1344663 ] http://bugzilla.redhat.com/1344663 (ASSIGNED) Component: openstack-packstack Last change: 2016-06-10 Summary: packstack install fails if CONFIG_HEAT_CFN_INSTALL=y ### openstack-puppet-modules (7 bugs) [1318332 ] http://bugzilla.redhat.com/1318332 (NEW) Component: openstack-puppet-modules Last change: 2016-04-19 Summary: Cinder workaround should be removed [1297535 ] http://bugzilla.redhat.com/1297535 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: Undercloud installation fails ::aodh::keystone::auth not found for instack [1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log [1316856 ] http://bugzilla.redhat.com/1316856 (NEW) Component: openstack-puppet-modules Last change: 2016-04-28 Summary: packstack fails to configure ovs bridge for CentOS [1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: trove guestagent config mods for integration testing [1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2016-05-18 Summary: Offset Swift ports to 6200 [1289761 ] http://bugzilla.redhat.com/1289761 (NEW) Component: openstack-puppet-modules Last change: 2016-05-25 Summary: PackStack installs Nova crontab that nova user can't run ### openstack-sahara (2 bugs) [1305790 ] http://bugzilla.redhat.com/1305790 (NEW) Component: openstack-sahara Last change: 2016-02-09 Summary: Failure to launch Caldera 5.0.4 Hadoop Cluster via Sahara Wizards on RDO Liberty [1305419 ] 
http://bugzilla.redhat.com/1305419 (NEW) Component: openstack-sahara Last change: 2016-02-10 Summary: Failure to launch Hadoop HDP 2.0.6 Cluster via Sahara Wizards on RDO Liberty ### openstack-selinux (3 bugs) [1320043 ] http://bugzilla.redhat.com/1320043 (NEW) Component: openstack-selinux Last change: 2016-04-19 Summary: rootwrap-daemon can't start after reboot due to AVC denial [1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2016-04-18 Summary: keystone fails to start: raise exception.ConfigFileNotF ound(config_file=paste_config_value) [1341738 ] http://bugzilla.redhat.com/1341738 (NEW) Component: openstack-selinux Last change: 2016-06-01 Summary: AVC: beam.smp tries to write in SSL certificate ### openstack-tripleo (22 bugs) [1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA [1344620 ] http://bugzilla.redhat.com/1344620 (NEW) Component: openstack-tripleo Last change: 2016-06-10 Summary: Nova instance live migration fails: migrateToURI3() got an unexpected keyword argument 'bandwidth' [1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Scaling work to do during icehouse [1344507 ] http://bugzilla.redhat.com/1344507 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Nova novnc console doesn't load 2/3 times: Failed to connect to server (code: 1006) [1344495 ] http://bugzilla.redhat.com/1344495 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Horizon: Error: Unable to retrieve project list and Error: Unable to retrieve domain list. 
[1329095 ] http://bugzilla.redhat.com/1329095 (NEW) Component: openstack-tripleo Last change: 2016-04-22 Summary: mariadb and keystone down after an upgrade from liberty to mitaka [1344398 ] http://bugzilla.redhat.com/1344398 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: SSL enabled undercloud doesn't configure https protocol for several endpoints [1343634 ] http://bugzilla.redhat.com/1343634 (NEW) Component: openstack-tripleo Last change: 2016-06-07 Summary: controller-no-external.yaml template still creates external network and fails to deploy [1344467 ] http://bugzilla.redhat.com/1344467 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Unable to launch instance: Invalid: Volume sets discard option, qemu (1, 6, 0) or later is required. [1344442 ] http://bugzilla.redhat.com/1344442 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Ceilometer central fails to start: ImportError: No module named redis [1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][ironic]: Integration of Ironic in to TripleO [1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation [1344447 ] http://bugzilla.redhat.com/1344447 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Openstack-gnocchi-statsd fails to start; ImportError: Your rados python module does not support omap feature. 
Install 'cradox' (recommended) or upgrade 'python- rados' >= 9.1.0 [1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: missing python-proliantutils [1344451 ] http://bugzilla.redhat.com/1344451 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: HAProxy logs show up in the os-collect-config journal [1334259 ] http://bugzilla.redhat.com/1334259 (NEW) Component: openstack-tripleo Last change: 2016-05-09 Summary: openstack overcloud image upload fails with "Required file "./ironic-python-agent.initramfs" does not exist." [1340865 ] http://bugzilla.redhat.com/1340865 (NEW) Component: openstack-tripleo Last change: 2016-06-07 Summary: Tripleo QuickStart HA deployment attempts constantly crash [1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar [1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: User can not login into the overcloud horizon using the proper credentials [1303614 ] http://bugzilla.redhat.com/1303614 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: overcloud deployment failed AttributeError: 'Proxy' object has no attribute 'api' [1341093 ] http://bugzilla.redhat.com/1341093 (NEW) Component: openstack-tripleo Last change: 2016-06-01 Summary: Tripleo QuickStart HA deployment attempts constantly crash [1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI ### openstack-tripleo-heat-templates (2 bugs) [1342145 ] http://bugzilla.redhat.com/1342145 (NEW) Component: openstack-tripleo-heat-templates Last change: 2016-06-02 Summary: Deploying Manila is not possible due to missing template [1266027 ] http://bugzilla.redhat.com/1266027 (NEW) 
Component: openstack-tripleo-heat-templates Last change: 2016-04-18 Summary: TripleO should use pymysql database driver since Liberty ### openstack-tripleo-image-elements (1 bug) [1303567 ] http://bugzilla.redhat.com/1303567 (NEW) Component: openstack-tripleo-image-elements Last change: 2016-04-18 Summary: Overcloud deployment fails using Ceph ### openstack-trove (1 bug) [1327068 ] http://bugzilla.redhat.com/1327068 (NEW) Component: openstack-trove Last change: 2016-05-24 Summary: trove guest agent should create a sudoers entry ### Package Review (14 bugs) [1326586 ] http://bugzilla.redhat.com/1326586 (NEW) Component: Package Review Last change: 2016-04-13 Summary: Review request: Sensu [1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2016-05-19 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud [1342987 ] http://bugzilla.redhat.com/1342987 (NEW) Component: Package Review Last change: 2016-06-06 Summary: Review Request: openstack-vitrage - OpenStack RCA (Root Cause Analysis) Engine [1345815 ] http://bugzilla.redhat.com/1345815 (NEW) Component: Package Review Last change: 2016-06-14 Summary: Review Request: openstack-cloudkitty-ui - CloudKitty dashboard [1344368 ] http://bugzilla.redhat.com/1344368 (NEW) Component: Package Review Last change: 2016-06-09 Summary: Review Request: openstack-ironic-ui - OpenStack Ironic Dashboard [1341687 ] http://bugzilla.redhat.com/1341687 (NEW) Component: Package Review Last change: 2016-06-03 Summary: Review request: openstack-neutron-lbaas-ui [1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2016-05-20 Summary: Review Request: Murano - is an application catalog for OpenStack [1329125 ] http://bugzilla.redhat.com/1329125 (ASSIGNED) Component: Package Review Last change: 2016-04-26 Summary: Review Request: python-oslo-privsep - OpenStack library for privilege separation [1331486 ] 
http://bugzilla.redhat.com/1331486 (NEW) Component: Package Review Last change: 2016-05-24 Summary: Tracker bugzilla for puppet packages in RDO Newton cycle [1312328 ] http://bugzilla.redhat.com/1312328 (NEW) Component: Package Review Last change: 2016-05-19 Summary: New Package: openstack-ironic-staging-drivers [1344297 ] http://bugzilla.redhat.com/1344297 (NEW) Component: Package Review Last change: 2016-06-09 Summary: Watcher service package [1318765 ] http://bugzilla.redhat.com/1318765 (NEW) Component: Package Review Last change: 2016-06-07 Summary: Review Request: openstack-sahara-tests - Sahara Scenario Test Framework [1342227 ] http://bugzilla.redhat.com/1342227 (ASSIGNED) Component: Package Review Last change: 2016-06-06 Summary: Review Request: python-designate-tests-tempest - Tempest Integration of Designate [1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2016-04-18 Summary: New Package: python-dracclient ### python-novaclient (1 bug) [1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2016-05-02 Summary: Missing versioned dependency on python-six ### rdo-manager (14 bugs) [1306350 ] http://bugzilla.redhat.com/1306350 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: With RDO-manager, if not configured, the first nic on compute nodes gets addresses from dhcp as a default [1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Duplicate nova hypervisors after rebooting compute nodes [1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: No way to increase yum timeouts when building images [1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:] [1273541 ] 
http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.) [1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera [1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: 1 of the overcloud VMs (nova) is stack in spawning state [1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates [1306364 ] http://bugzilla.redhat.com/1306364 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: With RDO-manager, using bridge mappings, Neutron opensvswitch-agent plugin's config file don't gets populated correctly [1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: HA overcloud with network isolation deployment fails [1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Glance client returning 'Expected endpoint' [1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: there is a newer image that can be used to deploy openstack [1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux. 
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW)
Component: rdo-manager
Last change: 2016-04-18
Summary: overcloud-novacompute stuck in spawning state

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED)
Component: rdopkg
Last change: 2014-05-22
Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (2 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (ASSIGNED)
Component: RFEs
Last change: 2016-04-18
Summary: RFE: wait for DB after boot

[1158517 ] http://bugzilla.redhat.com/1158517 (NEW)
Component: RFEs
Last change: 2016-05-20
Summary: [RFE] Provide easy to use upgrade tool

### tempest (3 bugs)

[1345736 ] http://bugzilla.redhat.com/1345736 (NEW)
Component: tempest
Last change: 2016-06-13
Summary: Installing mitaka openstack-tempest RDO build with some workarounds fails to run tempest smoke tests

[1344339 ] http://bugzilla.redhat.com/1344339 (NEW)
Component: tempest
Last change: 2016-06-14
Summary: install_venv script fails to open requirements file

[1250081 ] http://bugzilla.redhat.com/1250081 (NEW)
Component: tempest
Last change: 2015-08-06
Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is in state MODIFIED, POST, or ON_QA, meaning a fix exists but has not yet been verified. You can help out by testing the fix to make sure it works as intended.
(45 bugs) ### distribution (5 bugs) [1328980 ] http://bugzilla.redhat.com/1328980 (MODIFIED) Component: distribution Last change: 2016-04-21 Summary: Log handler repeatedly crashes [1344148 ] http://bugzilla.redhat.com/1344148 (MODIFIED) Component: distribution Last change: 2016-06-13 Summary: RDO mitaka openstack-tempest build requires updated python-urllib3 [1336566 ] http://bugzilla.redhat.com/1336566 (ON_QA) Component: distribution Last change: 2016-05-20 Summary: Paramiko needs to be updated to 2.0 to match upstream requirement [1317971 ] http://bugzilla.redhat.com/1317971 (POST) Component: distribution Last change: 2016-05-23 Summary: openstack-cloudkitty-common should have a /etc/cloudkitty/api_paste.ini [1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2016-04-18 Summary: Tuskar Fails After Remove/Reinstall Of RDO ### instack-undercloud (1 bug) [1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: instack-undercloud Last change: 2016-05-05 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf. 
### openstack-ceilometer (1 bug) [1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-04-18 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing ### openstack-glance (1 bug) [1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2016-04-19 Summary: Glance api ssl issue ### openstack-ironic-discoverd (1 bug) [1322892 ] http://bugzilla.redhat.com/1322892 (MODIFIED) Component: openstack-ironic-discoverd Last change: 2016-06-13 Summary: No valid interfaces found during introspection ### openstack-keystone (2 bugs) [1341332 ] http://bugzilla.redhat.com/1341332 (POST) Component: openstack-keystone Last change: 2016-06-01 Summary: keystone logrotate configuration should use size configuration [1280530 ] http://bugzilla.redhat.com/1280530 (MODIFIED) Component: openstack-keystone Last change: 2016-05-20 Summary: Fernet tokens cannot read key files with SELInux enabled ### openstack-neutron (1 bug) [1334797 ] http://bugzilla.redhat.com/1334797 (POST) Component: openstack-neutron Last change: 2016-06-15 Summary: Ensure translations are installed correctly and picked up at runtime ### openstack-nova (1 bug) [1301156 ] http://bugzilla.redhat.com/1301156 (POST) Component: openstack-nova Last change: 2016-04-22 Summary: openstack-nova missing specfile requires on castellan>=0.3.1 ### openstack-packstack (20 bugs) [1335612 ] http://bugzilla.redhat.com/1335612 (MODIFIED) Component: openstack-packstack Last change: 2016-05-31 Summary: CONFIG_USE_SUBNETS=y won't work correctly with VLAN [1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: packstack requires 2 runs to install ceilometer [1288179 ] http://bugzilla.redhat.com/1288179 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly" 
[1018900 ] http://bugzilla.redhat.com/1018900 (MODIFIED) Component: openstack-packstack Last change: 2016-05-18 Summary: Packstack fails with "The iptables provider can not handle attribute outiface" [1285314 ] http://bugzilla.redhat.com/1285314 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack needs to support aodh services since Mitaka [1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added [1302275 ] http://bugzilla.redhat.com/1302275 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: neutron-l3-agent does not start on Mitaka-2 when enabling FWaaS [1302256 ] http://bugzilla.redhat.com/1302256 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: neutron-server does not start on Mitaka-2 when enabling LBaaS [1266028 ] http://bugzilla.redhat.com/1266028 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack should use pymysql database driver since Liberty [1296899 ] http://bugzilla.redhat.com/1296899 (POST) Component: openstack-packstack Last change: 2016-06-15 Summary: Swift's proxy-server is not configured to use ceilometer [1282746 ] http://bugzilla.redhat.com/1282746 (POST) Component: openstack-packstack Last change: 2016-05-18 Summary: Swift's proxy-server is not configured to use ceilometer [1150652 ] http://bugzilla.redhat.com/1150652 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6 [1297833 ] http://bugzilla.redhat.com/1297833 (POST) Component: openstack-packstack Last change: 2016-04-19 Summary: VPNaaS should use libreswan driver instead of openswan by default [1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-packstack Last change: 2016-05-18 Summary: Horizon help url in RDO points 
to the RHOS documentation [1187412 ] http://bugzilla.redhat.com/1187412 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Script wording for service installation should be consistent [1255369 ] http://bugzilla.redhat.com/1255369 (POST) Component: openstack-packstack Last change: 2016-05-19 Summary: Improve session settings for horizon [1298245 ] http://bugzilla.redhat.com/1298245 (MODIFIED) Component: openstack-packstack Last change: 2016-04-18 Summary: Add possibility to change DEFAULT/api_paste_config in trove.conf [1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway. [1124982 ] http://bugzilla.redhat.com/1124982 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Help text for SSL is incorrect regarding passphrase on the cert [1330289 ] http://bugzilla.redhat.com/1330289 (POST) Component: openstack-packstack Last change: 2016-05-21 Summary: Failure to install Controller/Network&&Compute Cluster on RDO Mitaka with keystone API V3 ### openstack-puppet-modules (1 bug) [1343565 ] http://bugzilla.redhat.com/1343565 (POST) Component: openstack-puppet-modules Last change: 2016-06-13 Summary: Use alternative puppet-certmonger ### openstack-utils (1 bug) [1211989 ] http://bugzilla.redhat.com/1211989 (POST) Component: openstack-utils Last change: 2016-04-18 Summary: openstack-status shows 'disabled on boot' for the mysqld service ### Package Review (4 bugs) [1323219 ] http://bugzilla.redhat.com/1323219 (ON_QA) Component: Package Review Last change: 2016-05-12 Summary: Review Request: openstack-trove-ui - OpenStack Dashboard plugin for Trove project [1318310 ] http://bugzilla.redhat.com/1318310 (POST) Component: Package Review Last change: 2016-06-07 Summary: Review Request: 
openstack-magnum-ui - OpenStack Magnum UI Horizon plugin [1331952 ] http://bugzilla.redhat.com/1331952 (POST) Component: Package Review Last change: 2016-06-01 Summary: Review Request: openstack-mistral-ui - OpenStack Mistral Dashboard [1323222 ] http://bugzilla.redhat.com/1323222 (ON_QA) Component: Package Review Last change: 2016-05-12 Summary: Review request for openstack-sahara-ui ### python-keystoneclient (1 bug) [973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2016-04-19 Summary: user-get fails when using IDs which are not UUIDs ### rdo-manager (2 bugs) [1271335 ] http://bugzilla.redhat.com/1271335 (POST) Component: rdo-manager Last change: 2016-06-09 Summary: [RFE] Support explicit configuration of L2 population [1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2016-04-18 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo" ### rdo-manager-cli (2 bugs) [1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2016-04-18 Summary: VXLAN should be default neutron network type [1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2016-04-18 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils" ### tempest (1 bug) [1342218 ] http://bugzilla.redhat.com/1342218 (MODIFIED) Component: tempest Last change: 2016-06-03 Summary: RDO openstack-tempest RPM should remove requirements.txt from source From javier.pena at redhat.com Wed Jun 15 16:02:06 2016 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 15 Jun 2016 12:02:06 -0400 (EDT) Subject: [rdo-list] [Meeting] RDO meeting (2016-06-15) Minutes Message-ID: <1544118739.15557404.1466006526584.JavaMail.zimbra@redhat.com> ============================== #rdo: RDO meeting (2016-06-15) ============================== Meeting started by 
jpena at 15:00:25 UTC. The full logs are available at
https://meetbot.fedoraproject.org/rdo/2016-06-15/rdo_meeting_(2016-06-15).2016-06-15-15.00.log.html

Meeting summary
---------------
* roll call (jpena, 15:00:57)
* DLRN instance migration to ci.centos infra (jpena, 15:02:38)
  * LINK: https://github.com/rdo-infra/ansible-role-rdomonitoring/blob/master/files/check-delorean-builds.py (dmsimard, 15:06:33)
  * ACTION: jpena to coordinate dlrn server switch once https://review.rdoproject.org/r/1407 is merged and tested in the new server (jpena, 15:10:47)
* openstack/packstack stable/kilo EOL (jpena, 15:12:35)
  * ACTION: apevec and imcsk8 to coordinate openstack/packstack eol-kilo after checking upstream Launchpad bugs (apevec, 15:19:14)
* graylist review.rdoproject.org (jpena, 15:21:21)
  * ACTION: fbo followup with reverse dns issue (fbo, 15:34:11)
* notify trello board changes in #rdo? (jpena, 15:34:33)
  * ACTION: number80 to investigate trobot on #rdo-dev (apevec, 15:48:41)
* Doc Day - https://etherpad.openstack.org/p/rdo-doc-day - June 16, 17 (jpena, 15:48:54)
  * ACTION: everyone to join the RDO Doc Day (jpena, 15:50:58)
* Chair for next meeting (jpena, 15:51:21)
  * ACTION: imcsk8 to chair next meeting (jpena, 15:52:17)
* open floor (jpena, 15:52:28)

Meeting ended at 16:00:47 UTC.
Action Items
------------
* jpena to coordinate dlrn server switch once https://review.rdoproject.org/r/1407 is merged and tested in the new server
* apevec and imcsk8 to coordinate openstack/packstack eol-kilo after checking upstream Launchpad bugs
* fbo followup with reverse dns issue
* number80 to investigate trobot on #rdo-dev
* everyone to join the RDO Doc Day
* imcsk8 to chair next meeting

Action Items, by person
-----------------------
* apevec
  * apevec and imcsk8 to coordinate openstack/packstack eol-kilo after checking upstream Launchpad bugs
* fbo
  * fbo followup with reverse dns issue
* imcsk8
  * apevec and imcsk8 to coordinate openstack/packstack eol-kilo after checking upstream Launchpad bugs
  * imcsk8 to chair next meeting
* jpena
  * jpena to coordinate dlrn server switch once https://review.rdoproject.org/r/1407 is merged and tested in the new server
* number80
  * number80 to investigate trobot on #rdo-dev
* openstack
  * apevec and imcsk8 to coordinate openstack/packstack eol-kilo after checking upstream Launchpad bugs
* **UNASSIGNED**
  * everyone to join the RDO Doc Day

People Present (lines said)
---------------------------
* dmsimard (58)
* jpena (52)
* apevec (49)
* Duck (21)
* rbowen (17)
* number80 (14)
* imcsk8 (12)
* fbo (10)
* misc (8)
* trown|mtg (6)
* zodbot (6)
* chandankumar (6)
* openstack (4)
* jayg (3)
* EmilienM (3)
* jschlueter (2)
* amoralej (2)
* pkovar (1)
* panda (1)
* trown (1)
* hrybacki (1)
* Dog (1)
* weshay (1)

Generated by `MeetBot`_ 0.1.4

..
_`MeetBot`: http://wiki.debian.org/MeetBot

From adam.huffman at gmail.com Thu Jun 16 10:49:22 2016
From: adam.huffman at gmail.com (Adam Huffman)
Date: Thu, 16 Jun 2016 11:49:22 +0100
Subject: [rdo-list] Late test day failure
In-Reply-To:
References:
Message-ID:

Hi Wesley,

As I've indicated in the review, I can't test this properly because
quickstart.sh fails when I use the --no-clone option:

bash quickstart.sh --no-clone $VIRTHOST

Installing OpenStack mitaka on host scaleway1
Using directory /home/adam/.quickstart for a local working directory
+ export ANSIBLE_CONFIG=./ansible.cfg
+ ANSIBLE_CONFIG=./ansible.cfg
+ export ANSIBLE_INVENTORY=/home/adam/.quickstart/hosts
+ ANSIBLE_INVENTORY=/home/adam/.quickstart/hosts
+ '[' 0 = 0 ']'
+ rm -f /home/adam/.quickstart/hosts
+ '[' scaleway1 = localhost ']'
+ '[' '' = 1 ']'
+ VERBOSITY=vv
+ ansible-playbook -vv /home/adam/.quickstart/playbooks/quickstart.yml -e @./config/general_config/minimal.yml -e ansible_python_interpreter=/usr/bin/python -e @/home/adam/.quickstart/config/release/mitaka.yml -e local_working_dir=/home/adam/.quickstart -e virthost=scaleway1 -t untagged,provision,environment,undercloud-scripts,overcloud-scripts
Using /etc/ansible/ansible.cfg as config file
ERROR! the file_name '/home/adam/config/general_config/minimal.yml' does not exist, or is not readable

Cheers,
Adam

On Mon, Jun 13, 2016 at 2:37 PM, Wesley Hayutin wrote:
> Adam,
> Any chance you can pull in the following change and see if this resolves
> your issue?
> You will need to use the quickstart.sh --no-clone option after pulling this
> change.
> > https://review.openstack.org/#/c/328984/
>
> On Sat, Jun 11, 2016 at 3:23 AM, Adam Huffman wrote:
>>
>> I'm following the TripleO quickstart instructions at:
>>
>> https://www.rdoproject.org/tripleo/
>>
>> There is a failure quite early on:
>>
>> TASK [provision/remote : Grant libvirt privileges to non-root user] ************
>> task path: /home/adam/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/tasks/main.yml:37
>> Saturday 11 June 2016 08:19:00 +0100 (0:00:01.699) 0:01:02.207 *********
>> fatal: [scaleway1]: FAILED! => {"changed": true, "failed": true,
>> "msg": "Destination directory /etc/polkit-1/localauthority/50-local.d
>> does not exist"}
>>
>> NO MORE HOSTS LEFT *************************************************************
>>
>> PLAY RECAP *********************************************************************
>> localhost : ok=4 changed=2 unreachable=0 failed=0
>> scaleway1 : ok=5 changed=3 unreachable=0 failed=1
>>
>> Saturday 11 June 2016 08:19:02 +0100 (0:00:02.017) 0:01:04.225 *********
>> ===============================================================================
>> TASK: setup ------------------------------------------------------------ 25.28s
>> TASK: setup ------------------------------------------------------------ 14.68s
>> TASK: provision/remote : Create virthost access key --------------------- 9.28s
>> TASK: provision/remote : Grant libvirt privileges to non-root user ------ 2.02s
>> TASK: provision/remote : Configure non-root user authorized_keys -------- 1.70s
>> TASK: provision/remote : Create non-root user --------------------------- 1.59s
>> TASK: provision/local : Ensure local working dir exists ----------------- 1.54s
>> TASK: provision/teardown : Get UID of non-root user --------------------- 1.44s
>> TASK: provision/local : Create empty ssh config file -------------------- 1.14s
>> TASK: provision/local : Check that virthost is set
---------------------- 0.84s
>> TASK: provision/local : Add the virthost to the inventory --------------- 0.82s
>> TASK: provision/teardown : Check for processes owned by non-root user --- 0.68s
>> TASK: provision/teardown : Wait for processes to exit ------------------- 0.63s
>> TASK: provision/teardown : Remove non-root user ephemeral files --------- 0.61s
>> TASK: provision/teardown : Kill (SIGKILL) all processes owned by non-root user --- 0.60s
>> TASK: provision/teardown : Remove non-root user account ----------------- 0.57s
>> TASK: provision/teardown : Kill (SIGTERM) all processes owned by non-root user --- 0.57s
>>
>> The host is a CentOS 7 machine.
>>
>> _______________________________________________
>> rdo-list mailing list
>> rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
>

From ckdwibedy at gmail.com Thu Jun 16 12:12:14 2016
From: ckdwibedy at gmail.com (Chinmaya Dwibedy)
Date: Thu, 16 Jun 2016 17:42:14 +0530
Subject: [rdo-list] Issue with assigning multiple VFs to VM instance
Message-ID:

Hi All,

I have installed the OpenStack Mitaka release on a CentOS 7 system. It has two Intel QAT devices, with 32 VF devices available per QAT device (DH895xCC).

[root at localhost nova(keystone_admin)]# lspci -nn | grep 0435
83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435]
[root at localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs
32
[root at localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs
32
[root at localhost nova(keystone_admin)]#

I changed the nova configuration (as stated below) to expose the VFs (via PCI passthrough) to the instances.
pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-VF"}
pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}]

Then I restarted the nova compute, nova API and nova scheduler services:

service openstack-nova-compute restart; service openstack-nova-api restart; systemctl restart openstack-nova-scheduler;

scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter

Thereafter all 64 available VFs show up in the nova database with select * from pci_devices. I set flavor 4 to pass two VFs to instances.

[root at localhost nova(keystone_admin)]# nova flavor-show 4
+----------------------------+--------------------------------------------+
| Property                   | Value                                      |
+----------------------------+--------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                      |
| OS-FLV-EXT-DATA:ephemeral  | 0                                          |
| disk                       | 80                                         |
| extra_specs                | {"pci_passthrough:alias": "QuickAssist:2"} |
| id                         | 4                                          |
| name                       | m1.large                                   |
| os-flavor-access:is_public | True                                       |
| ram                        | 8192                                       |
| rxtx_factor                | 1.0                                        |
| swap                       |                                            |
| vcpus                      | 4                                          |
+----------------------------+--------------------------------------------+
[root at localhost nova(keystone_admin)]#

Also, when I launch an instance using this new flavor, it goes into an error state:

nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST

Here is the output of nova-conductor.log:

2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
    raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.

Here is the output of nova-compute.log:

2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus: 36, total allocated vcpus: 16
2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view: name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB used_disk=320GB total_vcpus=36 used_vcpus=16 pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'), PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')]
2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record updated for localhost:localhost

Here is the output of nova-scheduler.log:

2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (-141 GB > -271 GB)
2016-06-16 07:55:34.637 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts
2016-06-16 07:55:34.638 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad
266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. Filter results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced instances from host 'localhost'. 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced instances from host 'localhost'. Note that, If I set the flavor as (#nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1") , it assigns a single VF to VM instance. I think, multiple PFs can be assigned per VM. Can anyone please suggest , where I am wrong and the way to solve this ? Thank you in advance for your support and help. Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... 
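For what it's worth, nova parses both pci_alias and pci_passthrough_whitelist values as JSON, so a single stray bracket is enough to leave the compute node without usable PCI pools while everything else appears healthy. A quick standalone sanity check for such config values (a sketch, not nova code; the option names are just labels here):

```python
import json

def check_pci_option(name, raw):
    # nova parses pci_alias / pci_passthrough_whitelist values as JSON;
    # an unbalanced bracket silently breaks PCI device tracking.
    try:
        json.loads(raw)
        return True
    except ValueError:
        print("invalid JSON in %s: %s" % (name, raw))
        return False

# Hypothetical broken value (note the stray '}') vs. a corrected one:
print(check_pci_option("pci_passthrough_whitelist",
                       '[{"vendor_id":"8086","product_id":"0443"}}]'))  # False
print(check_pci_option("pci_passthrough_whitelist",
                       '[{"vendor_id":"8086","product_id":"0443"}]'))   # True
```

Running nova.conf PCI values through a check like this before restarting services can save a round of scheduler debugging.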
URL: From trown at redhat.com Thu Jun 16 12:26:36 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 16 Jun 2016 08:26:36 -0400 Subject: [rdo-list] Late test day failure In-Reply-To: References: Message-ID: <57629AFC.3090608@redhat.com> On 06/16/2016 06:49 AM, Adam Huffman wrote: > Hi Wesley, > > As I've indicated in the review, I can't test this properly because > quickstart.sh fails when I use the --no-clone option: > > bash quickstart.sh --no-clone $VIRTHOST > > > > Installing OpenStack mitaka on host scaleway1 > Using directory /home/adam/.quickstart for a local working directory > + export ANSIBLE_CONFIG=./ansible.cfg > + ANSIBLE_CONFIG=./ansible.cfg > + export ANSIBLE_INVENTORY=/home/adam/.quickstart/hosts > + ANSIBLE_INVENTORY=/home/adam/.quickstart/hosts > + '[' 0 = 0 ']' > + rm -f /home/adam/.quickstart/hosts > + '[' scaleway1 = localhost ']' > + '[' '' = 1 ']' > + VERBOSITY=vv > + ansible-playbook -vv /home/adam/.quickstart/playbooks/quickstart.yml > -e @./config/general_config/minimal.yml -e > ansible_python_interpreter=/usr/bin/python -e > @/home/adam/.quickstart/config/release/mitaka.yml -e > local_working_dir=/home/adam/.quickstart -e virthost=scaleway1 -t > untagged,provision,environment,undercloud-scripts,overcloud-scripts > Using /etc/ansible/ansible.cfg as config file > ERROR! the file_name '/home/adam/config/general_config/minimal.yml' > does not exist, or is not readable > In order to use --no-clone, you must run quickstart.sh from the checked out copy of the repo. It looks like it was run from $HOME, which is why it failed there. > Cheers, > Adam > > On Mon, Jun 13, 2016 at 2:37 PM, Wesley Hayutin wrote: >> Adam, >> Any chance you can pull in the following change and see if this resolves >> your issue? >> You will need to use the quickstart.sh --no-clone option after pulling this >> change. 
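A small illustration of why the run location matters: quickstart.sh hands ansible-playbook a repo-relative path such as ./config/general_config/minimal.yml, and ansible resolves it against the current working directory, not against the checkout. A sketch (the paths are made up for the example):

```python
import os

def resolve_extra_vars(path, cwd):
    # Mimics how a relative "-e @./config/..." argument is resolved:
    # against the directory the command was launched from.
    return os.path.normpath(os.path.join(cwd, path))

rel = "./config/general_config/minimal.yml"
# Launched from $HOME: the file ansible looks for does not exist there.
print(resolve_extra_vars(rel, "/home/adam"))
# Launched from the checked-out repo: the file is found.
print(resolve_extra_vars(rel, "/home/adam/tripleo-quickstart"))
```

This is exactly the difference between the failing run above and one started from the repo checkout.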
>> >> https://review.openstack.org/#/c/328984/ >> >> On Sat, Jun 11, 2016 at 3:23 AM, Adam Huffman >> wrote: >>> >>> I'm following the TripleO quickstart instructions at: >>> >>> https://www.rdoproject.org/tripleo/ >>> >>> There is a failure quite early on: >>> >>> TASK [provision/remote : Grant libvirt privileges to non-root user] >>> ************ >>> task path: >>> /home/adam/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/tasks/main.yml:37 >>> Saturday 11 June 2016 08:19:00 +0100 (0:00:01.699) 0:01:02.207 >>> ********* >>> fatal: [scaleway1]: FAILED! => {"changed": true, "failed": true, >>> "msg": "Destination directory /etc/polkit-1/localauthority/50-local.d >>> does not exist"} >>> >>> NO MORE HOSTS LEFT >>> ************************************************************* >>> >>> PLAY RECAP >>> ********************************************************************* >>> localhost : ok=4 changed=2 unreachable=0 >>> failed=0 >>> scaleway1 : ok=5 changed=3 unreachable=0 >>> failed=1 >>> >>> Saturday 11 June 2016 08:19:02 +0100 (0:00:02.017) 0:01:04.225 >>> ********* >>> >>> =============================================================================== >>> TASK: setup ------------------------------------------------------------ >>> 25.28s >>> TASK: setup ------------------------------------------------------------ >>> 14.68s >>> TASK: provision/remote : Create virthost access key --------------------- >>> 9.28s >>> TASK: provision/remote : Grant libvirt privileges to non-root user ------ >>> 2.02s >>> TASK: provision/remote : Configure non-root user authorized_keys -------- >>> 1.70s >>> TASK: provision/remote : Create non-root user --------------------------- >>> 1.59s >>> TASK: provision/local : Ensure local working dir exists ----------------- >>> 1.54s >>> TASK: provision/teardown : Get UID of non-root user --------------------- >>> 1.44s >>> TASK: provision/local : Create empty ssh config file -------------------- >>> 1.14s >>> TASK: 
provision/local : Check that virthost is set ---------------------- >>> 0.84s >>> TASK: provision/local : Add the virthost to the inventory --------------- >>> 0.82s >>> TASK: provision/teardown : Check for processes owned by non-root user --- >>> 0.68s >>> TASK: provision/teardown : Wait for processes to exit ------------------- >>> 0.63s >>> TASK: provision/teardown : Remove non-root user ephemeral files --------- >>> 0.61s >>> TASK: provision/teardown : Kill (SIGKILL) all processes owned by >>> non-root user --- 0.60s >>> TASK: provision/teardown : Remove non-root user account ----------------- >>> 0.57s >>> TASK: provision/teardown : Kill (SIGTERM) all processes owned by >>> non-root user --- 0.57s >>> >>> The host is a CentOS 7 machine. >>> >>> _______________________________________________ >>> rdo-list mailing list >>> rdo-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rdo-list >>> >>> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From bderzhavets at hotmail.com Thu Jun 16 15:16:38 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 16 Jun 2016 15:16:38 +0000 Subject: [rdo-list] Attempt of Virtual TripleO deployment based on upstream docs In-Reply-To: <57629AFC.3090608@redhat.com> References: , <57629AFC.3090608@redhat.com> Message-ID: Using links :- http://docs.openstack.org/developer/tripleo-docs/environments/environments.html http://docs.openstack.org/developer/tripleo-docs/installation/installation.html http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#upload-images 1. Deployment based on Mitaka trunks hangs running `openstack undercloud install` Attempting to start nova-compute :- 1. nova-compute.log reports error connecting 127.0.0.1:5672 2. 
`netstat -antp` reports port 5672 bound to 192.0.2.1 and nothing else. Looks like a bug to me.

2. Deployment based on the Liberty stable branch fails to build all images required for upload. Following https://bugzilla.redhat.com/show_bug.cgi?id=1273647 :

stack$ export RDO_RELEASE='liberty'
stack$ openstack overcloud image build --all

It doesn't help. The version of python-tripleoclient is high enough.

Thanks.
Boris.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shevel.andrey at gmail.com Thu Jun 16 15:44:01 2016
From: shevel.andrey at gmail.com (Andrey Shevel)
Date: Thu, 16 Jun 2016 18:44:01 +0300
Subject: [rdo-list] mitaka installation
Message-ID: 

Trying to install OpenStack Mitaka, I ran into problems with the procedure described at https://www.rdoproject.org/install/quickstart/ . The final messages from packstack --allinone are below:

=========================================
193.124.84.22_prescript.pp:                       [ DONE ]
Applying 193.124.84.22_amqp.pp
Applying 193.124.84.22_mariadb.pp
193.124.84.22_amqp.pp:                            [ DONE ]
193.124.84.22_mariadb.pp:                         [ DONE ]
Applying 193.124.84.22_apache.pp
193.124.84.22_apache.pp:                          [ DONE ]
Applying 193.124.84.22_keystone.pp
Applying 193.124.84.22_glance.pp
Applying 193.124.84.22_cinder.pp
193.124.84.22_keystone.pp:                        [ ERROR ]
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 193.124.84.22_keystone.pp
Error: Could not prefetch keystone_role provider 'openstack': Could not authenticate
You will find full trace in log /var/tmp/packstack/20160616-133447-C9hfh9/manifests/193.124.84.22_keystone.pp.log
Please check log file /var/tmp/packstack/20160616-133447-C9hfh9/openstack-setup.log for more information

Additional information:
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be a problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 193.124.84.22.
To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://193.124.84.22/dashboard . Please find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://193.124.84.22/nagios username: nagiosadmin, password: f9d91344c4bb4f21
===============

The file keystone_role/openstack.rb is from the package openstack-puppet-modules-8.0.4-1.el7.noarch (from repo openstack-mitaka). I repeated this several times with a full reinstallation (yum erase [everything from repo openstack-mitaka], deletion of all conf directories, yum install).

Additional info:

cat /etc/os-release
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel at listserv.fnal.gov"
REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"

Does anybody have an idea how to fix this? Many thanks in advance.

--
Andrey Y Shevel

From bderzhavets at hotmail.com Thu Jun 16 16:06:31 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Thu, 16 Jun 2016 16:06:31 +0000
Subject: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2
Message-ID: 

What actually happened: I created a fresh CentOS 7.2 instance on the same server, so /var/cache/tripleo-quickstart/images became empty.
Then this piece of code, lines 73-75 of /home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:

===============================================================
- name: Get image
  command: >
    curl -sf -C- -o _{{ image.name }}.{{ image.type }} {{ image.url }}
  args:
    chdir: "{{ image_cache_dir }}"
  register: curl_result
  until: curl_result.rc not in [18, 56]
  retries: 20
  delay: 5
================================================================

no longer loops for me (as it had before, since late May 2016). It sits on the first curl invocation at a speed of 2.85 MB/min, i.e. 10-12 hours to download 2.5 GB. Thanks to the "-C-" (resume) option used by curl, undercloud.qcow2 would eventually get downloaded over several days by restarting `bash quickstart.sh $VIRTHOST` on a regular basis. This didn't show up just 2-3 days ago, probably because a cached copy of undercloud.qcow2 was being used. However, there was a point in the past (around 05/25) when I picked up undercloud.qcow2 the first time with no problems.

I also believe that all I did was help solve the issue with the telemetry services failure:
https://www.redhat.com/archives/rdo-list/2016-June/msg00036.html

It appears that my ISP has a download-speed problem with only one site: http://artifacts.ci.centos.org

Link to the file:
http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2

Thanks.
Boris.

-------------- next part --------------
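The until/retries/delay block above can be read as the following loop. This is a simulation of the retry semantics only (the callable stands in for the real curl run), not the actual Ansible implementation:

```python
import time

def fetch_with_resume(run_curl, retries=20, delay=5):
    # Mirrors the Ansible task: re-run "curl -C-" while it exits with
    # 18 (partial file) or 56 (recv error); any other exit code,
    # including 0, ends the loop.
    rc = run_curl()
    for _ in range(retries):
        if rc not in (18, 56):
            return rc
        time.sleep(delay)
        rc = run_curl()
    return rc

# Simulated download that drops the connection twice before finishing:
codes = iter([56, 18, 0])
print(fetch_with_resume(lambda: next(codes), delay=0))  # prints 0
```

Note that a slow-but-steady transfer exits curl with rc 0 (or hangs inside a single invocation), so this loop never fires; that matches the "sits on the first curl invocation" behaviour described above.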
URL: 

From apevec at redhat.com Thu Jun 16 18:39:29 2016
From: apevec at redhat.com (Alan Pevec)
Date: Thu, 16 Jun 2016 20:39:29 +0200
Subject: [rdo-list] mitaka installation
In-Reply-To: References: Message-ID: 

> ERROR : Error appeared during Puppet run: 193.124.84.22_keystone.pp
> Error: Could not prefetch keystone_role provider 'openstack': Could
> not authenticate
> You will find full trace in log
> /var/tmp/packstack/20160616-133447-C9hfh9/manifests/193.124.84.22_keystone.pp.log

^ please paste this file so we can see more details about the error

From apevec at redhat.com Thu Jun 16 18:44:47 2016
From: apevec at redhat.com (Alan Pevec)
Date: Thu, 16 Jun 2016 20:44:47 +0200
Subject: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2
In-Reply-To: References: Message-ID: 

> Link to the file
> http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2

Please try http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2

John, can you change default URL to buildlogs?
Cheers,
Alan

From bderzhavets at hotmail.com Thu Jun 16 19:29:05 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Thu, 16 Jun 2016 19:29:05 +0000
Subject: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2
In-Reply-To: References: , Message-ID: 

________________________________
From: Alan Pevec
Sent: Thursday, June 16, 2016 2:44 PM
To: Boris Derzhavets; John Trowbridge
Cc: rdo-list
Subject: Re: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2

> Link to the file
> http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2

Please try http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2

Works just fine:

[boris at ServerCentOS72 ~]$ wget http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2
--2016-06-16 22:23:31-- http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2
Resolving buildlogs.centos.org (buildlogs.centos.org)... 136.243.75.209, 2a01:4f8:212:29d0::1
Connecting to buildlogs.centos.org (buildlogs.centos.org)|136.243.75.209|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://buildlogs.cdn.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2 [following]
--2016-06-16 22:23:31-- http://buildlogs.cdn.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2
Resolving buildlogs.cdn.centos.org (buildlogs.cdn.centos.org)... 46.46.180.69
Connecting to buildlogs.cdn.centos.org (buildlogs.cdn.centos.org)|46.46.180.69|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2701548544 (2.5G)
Saving to: 'undercloud.qcow2'

100%[=============================================================>] 2,701,548,544 10.1MB/s in 4m 17s

2016-06-16 22:27:49 (10.0 MB/s) - 'undercloud.qcow2' saved [2701548544/2701548544]

Thank you,
Boris

John, can you change default URL to buildlogs?

Cheers,
Alan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From trown at redhat.com Thu Jun 16 20:36:58 2016
From: trown at redhat.com (John Trowbridge)
Date: Thu, 16 Jun 2016 16:36:58 -0400
Subject: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2
In-Reply-To: References: Message-ID: <57630DEA.4080806@redhat.com>

On 06/16/2016 02:44 PM, Alan Pevec wrote:
>> Link to the file
>> http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/undercloud.qcow2
>
> Please try http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/mitaka/delorean/undercloud.qcow2
>
> John, can you change default URL to buildlogs?
>

That is unfortunately blocked on solving the problem of the md5sum propagating faster than the image. We previously switched to buildlogs, but had to revert because downloads were failing due to a bad md5, when really it was just a new md5 combined with an old image.

> Cheers,
> Alan
>

From shevel.andrey at gmail.com Fri Jun 17 08:08:04 2016
From: shevel.andrey at gmail.com (Andrey Shevel)
Date: Fri, 17 Jun 2016 11:08:04 +0300
Subject: [rdo-list] mitaka installation
In-Reply-To: References: Message-ID: 

The file REINSTALL....
is a script to reinstall OpenStack Mitaka.

On Thu, Jun 16, 2016 at 9:39 PM, Alan Pevec wrote:
>> ERROR : Error appeared during Puppet run: 193.124.84.22_keystone.pp
>> Error: Could not prefetch keystone_role provider 'openstack': Could
>> not authenticate
>> You will find full trace in log
>> /var/tmp/packstack/20160616-133447-C9hfh9/manifests/193.124.84.22_keystone.pp.log
>
> ^ please paste this file so we can see more details about the error

--
Andrey Y Shevel

-------------- next part --------------
A non-text attachment was scrubbed...
Name: openstack-setup.log
Type: application/octet-stream
Size: 8329 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 193.124.84.22_keystone.pp.log
Type: application/octet-stream
Size: 24450 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: REINSTALL-csd2-From-RDO.bash-2016-06-16_19-48-33
Type: application/octet-stream
Size: 22061 bytes
Desc: not available
URL: 

From apevec at redhat.com Fri Jun 17 09:00:31 2016
From: apevec at redhat.com (Alan Pevec)
Date: Fri, 17 Jun 2016 11:00:31 +0200
Subject: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2
In-Reply-To: <57630DEA.4080806@redhat.com> References: <57630DEA.4080806@redhat.com> Message-ID: 

On Thu, Jun 16, 2016 at 10:36 PM, John Trowbridge wrote:
>> John, can you change default URL to buildlogs?
> That is unfortunately blocked by solving the md5sum propagates faster
> than the image problem. We previously switched to buildlogs, but had to
> revert because downloads were failing due to bad md5, but really it was
> just a new md5 and an old image combination.

Bummer, I didn't know about that issue. Is it tracked somewhere? I've added it to the checklist on https://trello.com/c/6O18whhA/146-migrate-dlrn-instance-to-centos-infrastructure

Quick idea: change download format to tarball which includes image + md5 ?
Cheers,
Alan

From apevec at redhat.com Fri Jun 17 09:13:38 2016
From: apevec at redhat.com (Alan Pevec)
Date: Fri, 17 Jun 2016 11:13:38 +0200
Subject: [rdo-list] Attempt of TripleO Quickstart deployments hangs downloading undercloud.qcow2
In-Reply-To: References: <57630DEA.4080806@redhat.com> Message-ID: 

> Quick idea: change download format to tarball which includes image + md5 ?

I've piggy-backed it onto https://bugs.launchpad.net/tripleo-quickstart/+bug/1579921 ; could we raise the priority of this RFE?

Cheers,
Alan

From bderzhavets at hotmail.com Fri Jun 17 10:35:31 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Fri, 17 Jun 2016 10:35:31 +0000
Subject: [rdo-list] mitaka installation
In-Reply-To: References: , Message-ID: 

I have a well-tested workaround for CONFIG_KEYSTONE_API_VERSION=v3, based on backporting 2 recent upstream commits to stable RDO Mitaka. When you run `packstack --allinone`, the Keystone API is v2.0 by default, not v3, so you might be focused on v2.0; otherwise let me know. I kept detailed notes while doing the backport (one more time, thanks to Javier Pena for the upstream work).

Boris.

________________________________
From: rdo-list-bounces at redhat.com on behalf of Andrey Shevel
Sent: Friday, June 17, 2016 4:08 AM
To: alan.pevec at redhat.com
Cc: rdo-list
Subject: Re: [rdo-list] mitaka installation

The file REINSTALL.... is a script to reinstall OpenStack Mitaka.

On Thu, Jun 16, 2016 at 9:39 PM, Alan Pevec wrote:
>> ERROR : Error appeared during Puppet run: 193.124.84.22_keystone.pp
>> Error: Could not prefetch keystone_role provider 'openstack': Could
>> not authenticate
>> You will find full trace in log
>> /var/tmp/packstack/20160616-133447-C9hfh9/manifests/193.124.84.22_keystone.pp.log
>
> ^ please paste this file so we can see more details about the error

--
Andrey Y Shevel

-------------- next part --------------
An HTML attachment was scrubbed...
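The failure mode John describes can be made concrete: if the .md5 file on the mirror has already propagated but the image it describes has not, a correctly implemented verification necessarily fails. A sketch of the check, assuming a conventional "<hexdigest>  <filename>" checksum file format:

```python
import hashlib

def md5_matches(image_bytes, md5_line):
    # md5_line is the content of an undercloud.qcow2.md5-style file:
    # "<hexdigest>  undercloud.qcow2"
    expected = md5_line.split()[0]
    actual = hashlib.md5(image_bytes).hexdigest()
    return actual == expected

image = b"fake image payload"
good = hashlib.md5(image).hexdigest() + "  undercloud.qcow2"
stale = "d41d8cd98f00b204e9800998ecf8427e  undercloud.qcow2"
print(md5_matches(image, good))   # image and checksum in sync: True
print(md5_matches(image, stale))  # new md5 paired with an old image: False
```

Shipping the image and its checksum in one artifact (the tarball idea above) removes the window in which the two files can disagree.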
URL: From ckdwibedy at gmail.com Fri Jun 17 13:10:58 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Fri, 17 Jun 2016 18:40:58 +0530 Subject: [rdo-list] Mouse does not work in a hosted virtual machine using openstack-mitaka release Message-ID: Hi All, I have installed openstack-mitaka release on stag48 (CentO7 system) and created VMs (fedora 20) . Logged in using VM's instance console using horizon dashboard. The mouse does not function within a virtual machine. Can anyone suggest how to enable this ? Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Fri Jun 17 14:45:15 2016 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 17 Jun 2016 10:45:15 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> Message-ID: <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > We could take an easier way and assume we only have 3 roles, as in the > > current refactored code: controller, network, compute. The logic would > > then be: > > - By default we install everything, so all in one > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > Of course, the last two options would assume a first server is installed as > > controller. > > > > This would allow us to reuse the same answer file on all runs (one per host > > as you proposed), eliminate the ssh code as we are always running locally, > > and make some assumptions in the python code, like expecting OPM to be > > deployed and such. 
A contributed ansible wrapper to automate the runs > > would be straightforward to create. > > > > What do you think? Would it be worth the effort? > > +2 I like that proposal a lot! An ansible wrapper is then just an > example playbook in docs but could be done w/o ansible as well, > manually or using some other remote execution tooling of user's > choice. > Now that the phase 1 refactor is under review and passing CI, I think it's time to come to a conclusion on this. This option looks like the best compromise between keeping it simple and dropping the least possible amount of features. So unless someone has a better idea, I'll work on that as soon as the current review is merged. Regards, Javier > Alan > From bderzhavets at hotmail.com Fri Jun 17 15:15:57 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Fri, 17 Jun 2016 15:15:57 +0000 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <132686911.57987231.1465414072260.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> , <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Javier Pena Sent: Friday, June 17, 2016 10:45 AM To: rdo-list Cc: alan pevec Subject: Re: [rdo-list] Packstack refactor and future ideas ----- Original Message ----- > > We could take an easier way and assume we only have 3 roles, as in the > > current refactored code: controller, network, compute. 
The logic would > > then be: > > - By default we install everything, so all in one > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > Of course, the last two options would assume a first server is installed as > > controller. > > > > This would allow us to reuse the same answer file on all runs (one per host > > as you proposed), eliminate the ssh code as we are always running locally, > > and make some assumptions in the python code, like expecting OPM to be > > deployed and such. A contributed ansible wrapper to automate the runs > > would be straightforward to create. > > > > What do you think? Would it be worth the effort? > > +2 I like that proposal a lot! An ansible wrapper is then just an > example playbook in docs but could be done w/o ansible as well, > manually or using some other remote execution tooling of user's > choice. > Now that the phase 1 refactor is under review and passing CI, I think it's time to come to a conclusion on this. This option looks like the best compromise between keeping it simple and dropping the least possible amount of features. So unless someone has a better idea, I'll work on that as soon as the current review is merged. Would it be possible :- - By default we install everything, so all in one - If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_NETWORK_HOSTS, we apply the network manifest - Same as above if our host is part of CONFIG_COMPUTE_HOSTS - If our host is not CONFIG_CONTROLLER_HOST but is part of CONFIG_STORAGE_HOSTS, we apply the storage manifest Just one more role. May we have 4 roles ? Thanks Boris. 
Regards, Javier > Alan > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list rdo-list Info Page - Red Hat www.redhat.com The rdo-list mailing list provides a forum for discussions about installing, running, and using OpenStack on Red Hat based distributions. To see the collection of ... To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at gbraad.nl Mon Jun 20 07:22:41 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 20 Jun 2016 15:22:41 +0800 Subject: [rdo-list] [tripleo] [oooq] Baremetal deployment using quickstart; unregistered drivers after reboot Message-ID: Hi, I have been performing baremetal deployments using the TripleO quickstart (Mitaka), however I have noticed an issue after reboot of the undercloud... As reported before, the undercloud machine has to be rebooted once in a while as it looses it's network connectivity. This is already annoying, but issuing 'sudo su - stack -c "virsh reboot undercloud"` works around this for now. But the next problems seems to be that after the reboot `ironic-conductor` does not come up correctly. It seems to hang on connection to rabbitmq. When a new introspection of deployment is triggered I get a: "No valid host was found. Reason: No conductor service registered which supports driver pxe_ipmitool" After performing: sudo systemctl restart openstack-ironic-conductor cat /var/log/ironic/ironic-conductor.log |grep enabled I get the correct registration back: 2016-06-20 07:15:20.248 3501 DEBUG oslo_service.service [-] enabled_drivers = ['pxe_ipmitool', 'pxe_ssh', 'pxe_drac', 'pxe_ilo', 'pxe_wol', 'pxe_amt'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2521 Has this been observed by others? 
regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From me at gbraad.nl Mon Jun 20 09:30:13 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 20 Jun 2016 17:30:13 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" Message-ID: Hi, as mentioned in a previous email, I am deploying baremetal nodes using the quickstart. At the moment I can introspect nodes correctly, but am unable to deploy to them. I performed the checks as mentioned in /tripleo-docs/doc/source/troubleshooting/troubleshooting-overcloud.rst: The flavor list I have is unchanged: [stack at undercloud ~]$ openstack flavor list +--------------------------------------+---------------+------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public | +--------------------------------------+---------------+------+------+-----------+-------+-----------+ | 2e72ffb5-c6d7-46fd-ad75-448c0ad6855f | baremetal | 4096 | 40 | 0 | 1 | True | | 6b8b37e4-618d-4841-b5e3-f556ef27fd4d | oooq_compute | 8192 | 49 | 0 | 1 | True | | 973b58c3-8730-4b1f-96b2-fda253c15dbc | oooq_control | 8192 | 49 | 0 | 1 | True | | e22dc516-f53f-4a71-9793-29c614999801 | oooq_ceph | 8192 | 49 | 0 | 1 | True | | e3dce62a-ac8d-41ba-9f97-84554b247faa | block-storage | 4096 | 40 | 0 | 1 | True | | f5fe9ba6-cf5c-4ef3-adc2-34f3b4381915 | control | 4096 | 40 | 0 | 1 | True | | fabf81d8-44cb-4c25-8ed0-2afd124425db | compute | 4096 | 40 | 0 | 1 | True | | fe512696-2294-40cb-9d20-12415f45c1a9 | ceph-storage | 4096 | 40 | 0 | 1 | True | | ffc859af-dbfd-4e27-99fb-9ab02f4afa79 | swift-storage | 4096 | 40 | 0 | 1 | True | +--------------------------------------+---------------+------+------+-----------+-------+-----------+ In instackenv.json the nodes have been assigned as: [stack at undercloud ~]$ cat instackenv.json { "nodes":[ { "_comment": "ooo1", "pm_type":"pxe_ipmitool", "mac": [ "00:26:9e:9b:c3:36" ], "cpu": "16", "memory": "65536", 
"disk": "370", "arch": "x86_64", "pm_user":"root", "pm_password":"admin", "pm_addr":"10.0.108.126", "capabilities": "profile:control,boot_option:local" }, { "_comment": "ooo2", "pm_type":"pxe_ipmitool", "mac": [ "00:26:9e:9c:38:a6" ], "cpu": "16", "memory": "65536", "disk": "370", "arch": "x86_64", "pm_user":"root", "pm_password":"admin", "pm_addr":"10.0.108.127", "capabilities": "profile:compute,boot_option:local" } ] } [stack at undercloud ~]$ ironic node-list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | 0956df36-b642-44b8-a67f-0df88270372b | None | None | power off | manageable | False | | cc311355-f373-4e5c-99be-31ba3185639d | None | None | power off | manageable | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ And manually I perform the introspection: [stack at undercloud ~]$ openstack baremetal introspection bulk start Setting nodes for introspection to manageable... Starting introspection of node: 0956df36-b642-44b8-a67f-0df88270372b Starting introspection of node: cc311355-f373-4e5c-99be-31ba3185639d Waiting for introspection to finish... Introspection for UUID 0956df36-b642-44b8-a67f-0df88270372b finished successfully. Introspection for UUID cc311355-f373-4e5c-99be-31ba3185639d finished successfully. Setting manageable nodes to available... Node 0956df36-b642-44b8-a67f-0df88270372b has been set to available. Node cc311355-f373-4e5c-99be-31ba3185639d has been set to available. Introspection completed. 
[stack at undercloud ~]$ ironic node-list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | 0956df36-b642-44b8-a67f-0df88270372b | None | None | power off | available | False | | cc311355-f373-4e5c-99be-31ba3185639d | None | None | power off | available | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ After this, I start the deployment. I have defined the compute and control flavor to be of the respective type. [stack at undercloud ~]$ ./overcloud-deploy.sh + openstack overcloud deploy --templates --timeout 60 --control-scale 1 --control-flavor control --compute-scale 1 --compute-flavor compute --ntp-server pool.ntp.org -e /tmp/deploy_env.yaml Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates 2016-06-20 08:18:33 [overcloud]: CREATE_IN_PROGRESS Stack CREATE started 2016-06-20 08:18:33 [HorizonSecret]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:33 [RabbitCookie]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:33 [PcsdPassword]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:33 [MysqlClusterUniquePart]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:33 [MysqlRootPassword]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:33 [Networks]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:34 [VipConfig]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:34 [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed 2016-06-20 08:18:34 [overcloud-VipConfig-i4dgmk37z6hg]: CREATE_IN_PROGRESS Stack CREATE started 2016-06-20 08:18:34 [overcloud-Networks-4pb3htxq7rkd]: CREATE_IN_PROGRESS Stack CREATE started 2016-06-20 08:19:06 [Controller]: CREATE_FAILED ResourceInError: resources.Controller: Went to 
status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" 2016-06-20 08:19:06 [Controller]: DELETE_IN_PROGRESS state changed 2016-06-20 08:19:06 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" 2016-06-20 08:19:06 [NovaCompute]: DELETE_IN_PROGRESS state changed 2016-06-20 08:19:09 [Controller]: DELETE_COMPLETE state changed 2016-06-20 08:19:09 [NovaCompute]: DELETE_COMPLETE state changed 2016-06-20 08:19:12 [Controller]: CREATE_IN_PROGRESS state changed 2016-06-20 08:19:12 [NovaCompute]: CREATE_IN_PROGRESS state changed 2016-06-20 08:19:14 [Controller]: CREATE_FAILED ResourceInError: resources.Controller: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" 2016-06-20 08:19:14 [Controller]: DELETE_IN_PROGRESS state changed 2016-06-20 08:19:14 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" 2016-06-20 08:19:14 [NovaCompute]: DELETE_IN_PROGRESS state changed But as you can see, the deployment fails. 
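The scheduling step that fails here matches each introspected Ironic node's properties and capabilities against the requested flavor. A rough sketch of that matching (simplified and illustrative — the helper names are invented, not Nova's actual filter code; the values come from the node-show and flavor outputs in this thread):

```python
# Rough sketch of scheduler-style matching between a flavor and an
# introspected Ironic node. Helper names are invented for illustration;
# this is NOT Nova's actual filter implementation.

def parse_capabilities(caps):
    # "profile:control,boot_option:local" -> {"profile": "control", ...}
    return dict(item.split(":", 1) for item in caps.split(",") if item)

def node_matches_flavor(node, flavor):
    # Resource checks: the node must meet or exceed the flavor.
    if node["memory_mb"] < flavor["ram"]:
        return False
    if node["local_gb"] < flavor["disk"]:
        return False
    if node["cpus"] < flavor["vcpus"]:
        return False
    # Capability check: every capability the flavor requires (e.g.
    # profile=control) must be present on the node with the same value.
    node_caps = parse_capabilities(node["capabilities"])
    for key, value in flavor.get("required_capabilities", {}).items():
        if node_caps.get(key) != value:
            return False
    return True

# Values taken from the ironic node-show / flavor outputs in this thread.
control_node = {"memory_mb": 65536, "local_gb": 371, "cpus": 16,
                "capabilities": "profile:control,boot_option:local"}
control_flavor = {"ram": 4096, "disk": 40, "vcpus": 1,
                  "required_capabilities": {"profile": "control"}}

print(node_matches_flavor(control_node, control_flavor))  # True
```

If a node fails any of these checks — including a capability mismatch — it is filtered out, and with only two nodes that quickly becomes "No valid host was found".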
I check the introspection information and verify that the disk, local memory and cpus are matching or exceeding the flavor: [stack at undercloud ~]$ ironic node-show 0956df36-b642-44b8-a67f-0df88270372b +------------------------+-------------------------------------------------------------------------+ | Property | Value | +------------------------+-------------------------------------------------------------------------+ | chassis_uuid | | | clean_step | {} | | console_enabled | False | | created_at | 2016-06-20T05:51:17+00:00 | | driver | pxe_ipmitool | | driver_info | {u'ipmi_password': u'******', u'ipmi_address': u'10.0.108.126', | | | u'ipmi_username': u'root', u'deploy_kernel': | | | u'07c794a6-b427-4e75-ba58-7c555abbf2f8', u'deploy_ramdisk': u'67a66b7b- | | | 637f-4b25-bcef-ed39ae32a1f4'} | | driver_internal_info | {} | | extra | {u'hardware_swift_object': u'extra_hardware-0956df36-b642-44b8-a67f- | | | 0df88270372b'} | | inspection_finished_at | None | | inspection_started_at | None | | instance_info | {} | | instance_uuid | None | | last_error | None | | maintenance | False | | maintenance_reason | None | | name | None | | power_state | power off | | properties | {u'memory_mb': u'65536', u'cpu_arch': u'x86_64', u'local_gb': u'371', | | | u'cpus': u'16', u'capabilities': u'profile:control,boot_option:local'} | | provision_state | available | | provision_updated_at | 2016-06-20T07:32:46+00:00 | | raid_config | | | reservation | None | | target_power_state | None | | target_provision_state | None | | target_raid_config | | | updated_at | 2016-06-20T07:32:46+00:00 | | uuid | 0956df36-b642-44b8-a67f-0df88270372b | +------------------------+-------------------------------------------------------------------------+ And also the hypervisor stats are set, but only matching the node count. 
[stack at undercloud ~]$ nova hypervisor-stats +----------------------+-------+ | Property | Value | +----------------------+-------+ | count | 2 | | current_workload | 0 | | disk_available_least | 0 | | free_disk_gb | 0 | | free_ram_mb | 0 | | local_gb | 0 | | local_gb_used | 0 | | memory_mb | 0 | | memory_mb_used | 0 | | running_vms | 0 | | vcpus | 0 | | vcpus_used | 0 | +----------------------+-------+ Registering the nodes as profile:baremetal has the same effect. What other parameters are used in making the decision if a node can be deployed to? I probably miss a small detail... what can I check to make sure the deployment starts? regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From javier.pena at redhat.com Mon Jun 20 11:44:54 2016 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 20 Jun 2016 07:44:54 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> Message-ID: <227969699.215194.1466423093998.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: rdo-list-bounces at redhat.com on behalf of > Javier Pena > Sent: Friday, June 17, 2016 10:45 AM > To: rdo-list > Cc: alan pevec > Subject: Re: [rdo-list] Packstack refactor and future ideas > ----- Original Message ----- > > > We could take an easier way and assume we only have 3 roles, as in the > > > current refactored code: controller, network, compute. 
The logic would > > > then be: > > > - By default we install everything, so all in one > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > > > Of course, the last two options would assume a first server is installed > > > as > > > controller. > > > > > > This would allow us to reuse the same answer file on all runs (one per > > > host > > > as you proposed), eliminate the ssh code as we are always running > > > locally, > > > and make some assumptions in the python code, like expecting OPM to be > > > deployed and such. A contributed ansible wrapper to automate the runs > > > would be straightforward to create. > > > > > > What do you think? Would it be worth the effort? > > > > +2 I like that proposal a lot! An ansible wrapper is then just an > > example playbook in docs but could be done w/o ansible as well, > > manually or using some other remote execution tooling of user's > > choice. > > > Now that the phase 1 refactor is under review and passing CI, I think it's > time to come to a conclusion on this. > This option looks like the best compromise between keeping it simple and > dropping the least possible amount of features. So unless someone has a > better idea, I'll work on that as soon as the current review is merged. > > Would it be possible :- > > - By default we install everything, so all in one > - If our host is not CONFIG_CONTROLLER_HOST but is part of > CONFIG_NETWORK_HOSTS, we apply the network manifest > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > - If our host is not CONFIG_CONTROLLER_HOST but is part of > CONFIG_STORAGE_HOSTS , we apply the storage manifest > > Just one more role. May we have 4 roles ? This is a tricky one. There used to be support for separate CONFIG_STORAGE_HOSTS, but I think it has been removed (or at least not tested for quite a long time). 
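The proposed per-host logic is small enough to sketch directly (an illustrative outline of the proposal above, not actual Packstack code; the hosts and function name are invented, and the option names mirror the answer-file keys under discussion):

```python
# Illustrative outline of the proposed role selection, meant to run locally
# on each host against the same answer file. NOT actual Packstack code.

def manifests_for_host(host, answers):
    controller = answers["CONFIG_CONTROLLER_HOST"]
    if host == controller:
        # Default case: install everything, i.e. all-in-one.
        return ["controller", "network", "compute"]
    manifests = []
    # Not the controller host: apply only the manifests whose host lists
    # this host belongs to.
    if host in answers["CONFIG_NETWORK_HOSTS"]:
        manifests.append("network")
    if host in answers["CONFIG_COMPUTE_HOSTS"]:
        manifests.append("compute")
    return manifests

answers = {
    "CONFIG_CONTROLLER_HOST": "192.0.2.10",
    "CONFIG_NETWORK_HOSTS": ["192.0.2.10", "192.0.2.11"],
    "CONFIG_COMPUTE_HOSTS": ["192.0.2.12"],
}
print(manifests_for_host("192.0.2.10", answers))  # ['controller', 'network', 'compute']
print(manifests_for_host("192.0.2.11", answers))  # ['network']
print(manifests_for_host("192.0.2.12", answers))  # ['compute']
```

Adding a fourth storage role, as asked above, would just be one more membership check in the same shape.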
This would need to be a follow-up review, if it is finally decided to do so. Regards, Javier > Thanks > Boris. > Regards, > Javier > > Alan > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com From ckdwibedy at gmail.com Mon Jun 20 11:53:19 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Mon, 20 Jun 2016 17:23:19 +0530 Subject: [rdo-list] Issue with assigning multiple VFs to VM instance In-Reply-To: References: Message-ID: Hi, Can anyone please suggest how to assign multiple VF devices to a VM instance using the OpenStack Mitaka release? Thank you in advance for your time and support. Regards, Chinmaya On Thu, Jun 16, 2016 at 5:42 PM, Chinmaya Dwibedy wrote: > Hi All, > > > I have installed the OpenStack Mitaka release on a CentOS 7 system. > It has two Intel QAT devices. There are 32 VF devices available per QAT > Device/DH895xCC device.
> > > > [root at localhost nova(keystone_admin)]# lspci -nn | grep 0435 > > 83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT > [8086:0435] > > 88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT > [8086:0435] > > [root at localhost nova(keystone_admin)]# cat > /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs > > 32 > > [root at localhost nova(keystone_admin)]# cat > /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs > > 32 > > [root at localhost nova(keystone_admin)]# > > > Changed the nova configuration (as stated below) for exposing VF ( via > PCI-passthrough) to the instances. > > > pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id": > "8086", "device_type": "type-VF"} > > pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}}] > > Restarted the nova compute, nova API and nova scheduler service > > service openstack-nova-compute restart;service openstack-nova-api > restart;systemctl restart openstack-nova-scheduler; > > scheduler_available_filters=nova.scheduler.filters.all_filters > > > scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter > > > Thereafter it shows all the available VFs (64) in nova database upon > select * from pci_devices. Set the flavor 4 to allow passing two VFs to > instances. 
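One detail worth double-checking in the configuration quoted above: as written, the `pci_passthrough_whitelist` value has an unbalanced closing brace (`...}}]`). This may simply be a transcription slip in the mail, but Nova parses these entries as JSON-style values, so it is worth verifying. A quick generic sanity check (plain Python, not a Nova tool):

```python
import json

# The whitelist value exactly as quoted above -- note the stray '}'.
whitelist = '[{"vendor_id":"8086","product_id":"0443"}}]'

try:
    json.loads(whitelist)
    print("valid JSON")
except ValueError as err:
    # This branch is taken: the extra brace makes the value unparseable.
    print("invalid JSON:", err)

# With the stray brace removed, the value parses cleanly:
json.loads('[{"vendor_id":"8086","product_id":"0443"}]')
```

Since the VFs do show up in pci_stats later in this mail, the running configuration was probably fine and the slip is in the quoted text, but a check like this costs nothing before restarting services.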
> > > [root at localhost nova(keystone_admin)]# nova flavor-show 4 > > > +----------------------------+------------------------------------------------------------+ > > | Property | > Value | > > > +----------------------------+------------------------------------------------------------+ > > | OS-FLV-DISABLED:disabled | > False | > > | OS-FLV-EXT-DATA:ephemeral | > 0 | > > | disk | 80 > | > > | extra_specs | {"pci_passthrough:alias": "QuickAssist:2"} | > > | id | > 4 | > > | name | > m1.large | > > | os-flavor-access:is_public | > True | > > | ram | > 8192 | > > | rxtx_factor | > 1.0 | > > | swap > | | > > | vcpus | > 4 | > > > +----------------------------+------------------------------------------------------------+ > > [root at localhost nova(keystone_admin)]# > > > > Also when I launch an instance using this new flavor, it goes into an > error state > > > > nova boot --flavor 4 --key_name oskey1 --image > bc859dc5-103b-428b-814f-d36e59009454 --nic > net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST > > > > > Here goes the output of nova-conductor.log > > > > 2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to > compute_task_build_instances: No valid host was found. There are not enough > hosts available. > > Traceback (most recent call last): > > > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > line 150, in inner > > return func(*args, **kwargs) > > > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line > 104, in select_destinations > > dests = self.driver.select_destinations(ctxt, spec_obj) > > > > File > "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line > 74, in select_destinations > > raise exception.NoValidHost(reason=reason) > > > > NoValidHost: No valid host was found. There are not enough hosts available. 
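For reference, the `pci_passthrough:alias` extra spec used above is a comma-separated list of alias:count entries, which the scheduler turns into a request for that many matching devices. A sketch of how such a spec breaks down (illustrative parsing only, not Nova's implementation; treating a missing count as 1 is an assumption made for the sketch):

```python
# Illustrative breakdown of a "pci_passthrough:alias" extra spec into
# (alias_name, requested_count) pairs. NOT Nova's code.

def parse_alias_spec(spec):
    requests = []
    for part in spec.split(","):
        name, _, count = part.partition(":")
        # Assumption for this sketch: a missing count means one device.
        requests.append((name.strip(), int(count) if count else 1))
    return requests

print(parse_alias_spec("QuickAssist:2"))  # [('QuickAssist', 2)]
print(parse_alias_spec("QuickAssist:1,OtherAlias:1"))  # [('QuickAssist', 1), ('OtherAlias', 1)]
```

So "QuickAssist:2" asks the PciPassthroughFilter for two devices matching the QuickAssist alias on a single host.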
> > > Here goes the output of nova-compute.log > > > > 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker > [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus: > 36, total allocated vcpus: 16 > > 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker > [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view: > name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB > used_disk=320GB total_vcpus=36 used_vcpus=16 > pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'), > PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')] > > 2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker > [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record > updated for localhost:localhost > > > > Here goes the output of nova-scheduler.log > > > 2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space > than database expected (-141 GB > -271 GB) > > 2016-06-16 07:55:34.637 171018 INFO nova.filters > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter > returned 0 hosts > > 2016-06-16 07:55:34.638 171018 INFO nova.filters > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the > request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. 
Filter > results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: > 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', > 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: > (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] > > 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager > [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced > instances from host 'localhost'. > > 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager > [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced > instances from host 'localhost'. > > > > Note that, if I set the flavor as (#nova flavor-key 4 set > "pci_passthrough:alias"="QuickAssist:1"), it assigns a single VF to the VM > instance. I think multiple VFs can be assigned per VM. Can anyone please > suggest where I am wrong and the way to solve this? Thank you in advance > for your support and help. > > > > > Regards, > > Chinmaya > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckdwibedy at gmail.com Mon Jun 20 11:56:01 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Mon, 20 Jun 2016 17:26:01 +0530 Subject: [rdo-list] Mouse does not work in a hosted virtual machine using openstack-mitaka release In-Reply-To: References: Message-ID: Hi, Can anyone please suggest how to enable the mouse in a VM instance? Thank you in advance for your time and support. Regards, Chinmaya On Fri, Jun 17, 2016 at 6:40 PM, Chinmaya Dwibedy wrote: > Hi All, > > > I have installed the OpenStack Mitaka release on stag48 (a CentOS 7 system) and > created VMs (Fedora 20). Logged in to the VM's instance console using the > Horizon dashboard. The mouse does not function within a virtual machine. > Can anyone suggest how to enable this? > > > > Regards, > > Chinmaya > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bderzhavets at hotmail.com Mon Jun 20 13:35:52 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 20 Jun 2016 13:35:52 +0000 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <227969699.215194.1466423093998.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <1952373947.58008246.1465425205627.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> , <227969699.215194.1466423093998.JavaMail.zimbra@redhat.com> Message-ID: ________________________________ From: Javier Pena Sent: Monday, June 20, 2016 7:44 AM To: Boris Derzhavets Cc: rdo-list; alan pevec Subject: Re: [rdo-list] Packstack refactor and future ideas ----- Original Message ----- > From: rdo-list-bounces at redhat.com on behalf of > Javier Pena > Sent: Friday, June 17, 2016 10:45 AM > To: rdo-list > Cc: alan pevec > Subject: Re: [rdo-list] Packstack refactor and future ideas > ----- Original Message ----- > > > We could take an easier way and assume we only have 3 roles, as in the > > > current refactored code: controller, network, compute. The logic would > > > then be: > > > - By default we install everything, so all in one > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > > > Of course, the last two options would assume a first server is installed > > > as > > > controller. > > > > > > This would allow us to reuse the same answer file on all runs (one per > > > host > > > as you proposed), eliminate the ssh code as we are always running > > > locally, > > > and make some assumptions in the python code, like expecting OPM to be > > > deployed and such. A contributed ansible wrapper to automate the runs > > > would be straightforward to create. 
> > > > > > What do you think? Would it be worth the effort? > > > > +2 I like that proposal a lot! An ansible wrapper is then just an > > example playbook in docs but could be done w/o ansible as well, > > manually or using some other remote execution tooling of user's > > choice. > > > Now that the phase 1 refactor is under review and passing CI, I think it's > time to come to a conclusion on this. > This option looks like the best compromise between keeping it simple and > dropping the least possible amount of features. So unless someone has a > better idea, I'll work on that as soon as the current review is merged. > > Would it be possible :- > > - By default we install everything, so all in one > - If our host is not CONFIG_CONTROLLER_HOST but is part of > CONFIG_NETWORK_HOSTS, we apply the network manifest > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > - If our host is not CONFIG_CONTROLLER_HOST but is part of > CONFIG_STORAGE_HOSTS , we apply the storage manifest > > Just one more role. May we have 4 roles ? This is a tricky one. There used to be support for separate CONFIG_STORAGE_HOSTS, but I think it has been removed (or at least not tested for quite a long time). However, this feature currently works for RDO Mitaka ( as well it woks for Liberty) It's even possible to add Storage Node via packstack , taking care of glance and swift proxy keystone endpoints manually . For small prod deployments like several (5-10) Haswell Xeon boxes. ( no HA requirements from customer's side ). Ability to split Storage specifically Swift (AIO) instances or Cinder iSCSILVM back ends hosting Node from Controller is extremely critical feature. What I am writing is based on several projects committed in South America's countries. No complaints from site support stuff to myself for configurations deployed via Packstack. Dropping this feature ( unsupported , but stable working ) will for sure make Packstack almost useless toy . 
At the moment I am only able to play with TripleO Quickstart, because the upstream docs (the Mitaka trunk instruction set) for instack-virt-setup don't allow `openstack undercloud install` to complete, which makes the howto at https://remote-lab.net/rdo-manager-ha-openstack-deployment non-reproducible. I have nothing against the turn to TripleO, but the absence of high-quality Red Hat manuals for TripleO bare metal / TripleO instack-virt-setup will affect the RDO community in a widespread way, first of all in countries like Chile, Brazil, China, etc. Thank you. Boris. This would need to be a follow-up review, if it is finally decided to do so. Regards, Javier > Thanks > Boris. > Regards, > Javier > > Alan > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ichavero at redhat.com Mon Jun 20 14:11:48 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Mon, 20 Jun 2016 10:11:48 -0400 (EDT) Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> <227969699.215194.1466423093998.JavaMail.zimbra@redhat.com> Message-ID: <1805327027.177605.1466431907997.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Boris Derzhavets" > To: "Javier Pena" > Cc: "alan pevec" , "rdo-list" > Sent: Monday, June 20, 2016 8:35:52 AM > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > > > > > > From: Javier Pena > Sent: Monday, June 20, 2016 7:44 AM > To: Boris Derzhavets > Cc: rdo-list; alan pevec > Subject: Re: [rdo-list] Packstack refactor and future ideas > ----- Original Message ----- > > > From: rdo-list-bounces at redhat.com on behalf > > of > > Javier Pena > > Sent: Friday, June 17, 2016 10:45 AM > > To: rdo-list > > Cc: alan pevec > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > ----- Original Message ----- > > > > We could take an easier way and assume we only have 3 roles, as in the > > > > current refactored code: controller, network, compute. The logic would > > > > then be: > > > > - By default we install everything, so all in one > > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > > > > > Of course, the last two options would assume a first server is > > > > installed > > > > as > > > > controller. 
> > > > > > > > This would allow us to reuse the same answer file on all runs (one per > > > > host > > > > as you proposed), eliminate the ssh code as we are always running > > > > locally, > > > > and make some assumptions in the python code, like expecting OPM to be > > > > deployed and such. A contributed ansible wrapper to automate the runs > > > > would be straightforward to create. > > > > > > > > What do you think? Would it be worth the effort? > > > > > > +2 I like that proposal a lot! An ansible wrapper is then just an > > > example playbook in docs but could be done w/o ansible as well, > > > manually or using some other remote execution tooling of user's > > > choice. > > > > > Now that the phase 1 refactor is under review and passing CI, I think it's > > time to come to a conclusion on this. > > This option looks like the best compromise between keeping it simple and > > dropping the least possible amount of features. So unless someone has a > > better idea, I'll work on that as soon as the current review is merged. > > > > Would it be possible :- > > > > - By default we install everything, so all in one > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > CONFIG_STORAGE_HOSTS , we apply the storage manifest > > > > Just one more role. May we have 4 roles ? > > This is a tricky one. There used to be support for separate > CONFIG_STORAGE_HOSTS, but I think it has been removed (or at least not > tested for quite a long time). This option is still there, set as "unsupported". I think it might be a good idea to keep it. What do you guys think? > However, this feature currently works for RDO Mitaka ( as well it woks for > Liberty) > It's even possible to add Storage Node via packstack , taking care of glance > and swift proxy > keystone endpoints manually .
> For small prod deployments like several (5-10) Haswell Xeon boxes. ( no HA > requirements from > customer's side ). Ability to split Storage specifically Swift (AIO) > instances or Cinder iSCSILVM > back ends hosting Node from Controller is extremely critical feature. > What I am writing is based on several projects committed in South America's > countries. > No complaints from site support stuff to myself for configurations deployed > via Packstack. > Dropping this feature ( unsupported , but stable working ) will for sure make > Packstack > almost useless toy . > In situation when I am able only play with TripleO QuickStart due to Upstream > docs > ( Mitaka trunk instructions set) for instack-virt-setup don't allow to commit > `openstack undercloud install` makes Howto :- > > https://remote-lab.net/rdo-manager-ha-openstack-deployment > > non reproducible. I have nothing against TripleO turn, but absence of Red Hat > high quality manuals for TripleO bare metal / TripleO Instak-virt-setup > will affect RDO Community in wide spread way. I mean first all countries > like Chile, Brazil, China and etc. > > Thank you. > Boris. > > This would need to be a follow-up review, if it is finally decided to do so. > > Regards, > Javier > > > Thanks > > Boris. > > > Regards, > > Javier > > > > Alan > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list
> > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From rbowen at redhat.com Mon Jun 20 14:33:32 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 20 Jun 2016 10:33:32 -0400 Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org Message-ID: 59 unanswered questions: Unable to start Ceilometer services https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ Tags: ceilometer, ceilometer-api Dashboard console - Keyboard and mouse issue in Linux graphical environmevt https://ask.openstack.org/en/question/93583/dashboard-console-keyboard-and-mouse-issue-in-linux-graphical-environmevt/ Tags: nova, nova-console Adding hard drive space to RDO installation https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ Tags: cinder, openstack, space, add AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ Tags: openstack, networking, aws ceilometer: I've installed openstack mitaka. 
but swift stops working when i configured the pipeline and ceilometer filter https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ Tags: ceilometer, openstack-swift, mitaka Fail on installing the controller on Cent OS 7 https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ Tags: installation, centos7, controller the error of service entity and API endpoints https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ Tags: service, entity, and, api, endpoints Running delorean fails: Git won't fetch sources https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ Tags: delorean, rdo RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-cant-resolve-trunk-mgtrdoprojectorg/ Tags: rdo-manager Keystone authentication: Failed to contact the endpoint. https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ Tags: keystone, authenticate, endpoint, murano adding computer node. https://ask.openstack.org/en/question/91417/adding-computer-node/ Tags: rdo, openstack Liberty RDO: stack resource topology icons are pink https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ Tags: stack, resource, topology, dashboard Build of instance aborted: Block Device Mapping is Invalid. https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ Tags: cinder, lvm, centos7 No handlers could be found for logger "oslo_config.cfg" while syncing the glance database https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ Tags: liberty, glance, install-openstack how to use chef auto manage openstack in RDO? 
https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ Tags: chef, rdo Separate Cinder storage traffic from management https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ Tags: cinder, separate, nic, iscsi Openstack installation fails using packstack, failure is in installation of openstack-nova-compute. Error: Dependency Package[nova-compute] has failures https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ Tags: novacompute, rdo, packstack, dependency, failure CentOS OpenStack - compute node can't talk https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ Tags: rdo How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on RDO Liberty ? https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ Tags: rdo, liberty, swift, ha VM and container can't download anything from internet https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ Tags: rdo, neutron, network, connectivity Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ Tags: keyboard, map, keymap, vncproxy, novnc OpenStack-Docker driver failed https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ Tags: docker, openstack, liberty Can't create volume with cinder https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ Tags: cinder, glusterfs, nfs Sahara SSHException: Error reading SSH protocol banner https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ Tags: sahara, icehouse, ssh, vanila Error Sahara create cluster: 'Error attach volume to instance 
https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, vanila, icehouse Creating Sahara cluster: Error attach volume to instance https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ Tags: sahara, attach-volume, hadoop, icehouse, vanilla Routing between two tenants https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ Tags: kilo, fuel, rdo, routing RDO kilo installation metadata widget doesn't work https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ Tags: kilo, flavor, metadata Not able to ssh into RDO Kilo instance https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ Tags: rdo, instance-ssh redhat RDO enable access to swift via S3 https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ Tags: swift, s3 -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From trown at redhat.com Mon Jun 20 14:34:55 2016 From: trown at redhat.com (John Trowbridge) Date: Mon, 20 Jun 2016 10:34:55 -0400 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: References: Message-ID: <5767FF0F.7000307@redhat.com> Unfortunately, "No valid host" is the most generic error in OpenStack. Maybe someday Nova will provide a better error message for that error, but in the meantime we need to check the scheduler logs (/var/log/nova/nova-scheduler.log) for more clues. I usually grep for 'returned 0 host' to find which filter is failing to match any hosts. That narrows the search space to investigate further why the filter fails. On 06/20/2016 05:30 AM, Gerard Braad wrote: > Hi, > > > as mentioned in a previous email, I am deploying baremetal nodes using > the quickstart. At the moment I can introspect nodes correctly, but am > unable to deploy to them. 
> > I performed the checks as mentioned in > /tripleo-docs/doc/source/troubleshooting/troubleshooting-overcloud.rst: > > > The flavor list I have is unchanged: > [stack at undercloud ~]$ openstack flavor list > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > | ID | Name | RAM | Disk | > Ephemeral | VCPUs | Is Public | > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > | 2e72ffb5-c6d7-46fd-ad75-448c0ad6855f | baremetal | 4096 | 40 | > 0 | 1 | True | > | 6b8b37e4-618d-4841-b5e3-f556ef27fd4d | oooq_compute | 8192 | 49 | > 0 | 1 | True | > | 973b58c3-8730-4b1f-96b2-fda253c15dbc | oooq_control | 8192 | 49 | > 0 | 1 | True | > | e22dc516-f53f-4a71-9793-29c614999801 | oooq_ceph | 8192 | 49 | > 0 | 1 | True | > | e3dce62a-ac8d-41ba-9f97-84554b247faa | block-storage | 4096 | 40 | > 0 | 1 | True | > | f5fe9ba6-cf5c-4ef3-adc2-34f3b4381915 | control | 4096 | 40 | > 0 | 1 | True | > | fabf81d8-44cb-4c25-8ed0-2afd124425db | compute | 4096 | 40 | > 0 | 1 | True | > | fe512696-2294-40cb-9d20-12415f45c1a9 | ceph-storage | 4096 | 40 | > 0 | 1 | True | > | ffc859af-dbfd-4e27-99fb-9ab02f4afa79 | swift-storage | 4096 | 40 | > 0 | 1 | True | > +--------------------------------------+---------------+------+------+-----------+-------+-----------+ > > > In instackenv.json the nodes have been assigned as: > [stack at undercloud ~]$ cat instackenv.json > { > "nodes":[ > { > "_comment": "ooo1", > "pm_type":"pxe_ipmitool", > "mac": [ > "00:26:9e:9b:c3:36" > ], > "cpu": "16", > "memory": "65536", > "disk": "370", > "arch": "x86_64", > "pm_user":"root", > "pm_password":"admin", > "pm_addr":"10.0.108.126", > "capabilities": "profile:control,boot_option:local" > }, > { > "_comment": "ooo2", > "pm_type":"pxe_ipmitool", > "mac": [ > "00:26:9e:9c:38:a6" > ], > "cpu": "16", > "memory": "65536", > "disk": "370", > "arch": "x86_64", > "pm_user":"root", > "pm_password":"admin", > 
"pm_addr":"10.0.108.127", > "capabilities": "profile:compute,boot_option:local" > } > ] > } > > [stack at undercloud ~]$ ironic node-list > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power > State | Provisioning State | Maintenance | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | 0956df36-b642-44b8-a67f-0df88270372b | None | None | power > off | manageable | False | > | cc311355-f373-4e5c-99be-31ba3185639d | None | None | power > off | manageable | False | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > And manually I perform the introspection: > > [stack at undercloud ~]$ openstack baremetal introspection bulk start > Setting nodes for introspection to manageable... > Starting introspection of node: 0956df36-b642-44b8-a67f-0df88270372b > Starting introspection of node: cc311355-f373-4e5c-99be-31ba3185639d > Waiting for introspection to finish... > Introspection for UUID 0956df36-b642-44b8-a67f-0df88270372b finished > successfully. > Introspection for UUID cc311355-f373-4e5c-99be-31ba3185639d finished > successfully. > Setting manageable nodes to available... > Node 0956df36-b642-44b8-a67f-0df88270372b has been set to available. > Node cc311355-f373-4e5c-99be-31ba3185639d has been set to available. > Introspection completed. 
> > > [stack at undercloud ~]$ ironic node-list > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | UUID | Name | Instance UUID | Power > State | Provisioning State | Maintenance | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > | 0956df36-b642-44b8-a67f-0df88270372b | None | None | power > off | available | False | > | cc311355-f373-4e5c-99be-31ba3185639d | None | None | power > off | available | False | > +--------------------------------------+------+---------------+-------------+--------------------+-------------+ > > > After this, I start the deployment. I have defined the compute and > control flavor to be of the respective type. > > [stack at undercloud ~]$ ./overcloud-deploy.sh > > + openstack overcloud deploy --templates --timeout 60 --control-scale > 1 --control-flavor control --compute-scale 1 --compute-flavor compute > --ntp-server pool.ntp.org -e /tmp/deploy_env.yaml > Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates > 2016-06-20 08:18:33 [overcloud]: CREATE_IN_PROGRESS Stack CREATE started > 2016-06-20 08:18:33 [HorizonSecret]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:33 [RabbitCookie]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:33 [PcsdPassword]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:33 [MysqlClusterUniquePart]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:33 [MysqlRootPassword]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:33 [Networks]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:34 [VipConfig]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:34 [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:18:34 [overcloud-VipConfig-i4dgmk37z6hg]: > CREATE_IN_PROGRESS Stack CREATE started > 2016-06-20 08:18:34 [overcloud-Networks-4pb3htxq7rkd]: > CREATE_IN_PROGRESS Stack CREATE started > > 2016-06-20 08:19:06 
[Controller]: CREATE_FAILED ResourceInError: > resources.Controller: Went to status ERROR due to "Message: No valid > host was found. There are not enough hosts available., Code: 500" > 2016-06-20 08:19:06 [Controller]: DELETE_IN_PROGRESS state changed > 2016-06-20 08:19:06 [NovaCompute]: CREATE_FAILED ResourceInError: > resources.NovaCompute: Went to status ERROR due to "Message: No valid > host was found. There are not enough hosts available., Code: 500" > 2016-06-20 08:19:06 [NovaCompute]: DELETE_IN_PROGRESS state changed > 2016-06-20 08:19:09 [Controller]: DELETE_COMPLETE state changed > 2016-06-20 08:19:09 [NovaCompute]: DELETE_COMPLETE state changed > 2016-06-20 08:19:12 [Controller]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:19:12 [NovaCompute]: CREATE_IN_PROGRESS state changed > 2016-06-20 08:19:14 [Controller]: CREATE_FAILED ResourceInError: > resources.Controller: Went to status ERROR due to "Message: No valid > host was found. There are not enough hosts available., Code: 500" > 2016-06-20 08:19:14 [Controller]: DELETE_IN_PROGRESS state changed > 2016-06-20 08:19:14 [NovaCompute]: CREATE_FAILED ResourceInError: > resources.NovaCompute: Went to status ERROR due to "Message: No valid > host was found. There are not enough hosts available., Code: 500" > 2016-06-20 08:19:14 [NovaCompute]: DELETE_IN_PROGRESS state changed > > > But as you can see, the deployment fails. 
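The scheduler-log search John describes above can be scripted. A minimal sketch, with a two-line sample file standing in for /var/log/nova/nova-scheduler.log (the sample lines are modeled on output quoted later in this thread; adjust the path if your logs live elsewhere):

```shell
# Write a small sample log; on a real undercloud you would point grep
# at /var/log/nova/nova-scheduler.log instead.
cat > /tmp/nova-scheduler.log <<'EOF'
2016-06-20 09:15:18.275 1186 DEBUG nova.filters Filter ComputeCapabilitiesFilter returned 1 host(s)
2016-06-20 09:15:18.276 1186 INFO nova.filters Filter RamFilter returned 0 hosts
EOF

# Find the first filter that eliminated every host.
grep 'returned 0 host' /tmp/nova-scheduler.log
```

On the sample above the match names RamFilter, which narrows the investigation to RAM accounting rather than capabilities or disk.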
> > I check the introspection information and verify that the disk, local > memory and cpus are matching or exceeding the flavor: > > [stack at undercloud ~]$ ironic node-show 0956df36-b642-44b8-a67f-0df88270372b > +------------------------+-------------------------------------------------------------------------+ > | Property | Value > | > +------------------------+-------------------------------------------------------------------------+ > | chassis_uuid | > | > | clean_step | {} > | > | console_enabled | False > | > | created_at | 2016-06-20T05:51:17+00:00 > | > | driver | pxe_ipmitool > | > | driver_info | {u'ipmi_password': u'******', > u'ipmi_address': u'10.0.108.126', | > | | u'ipmi_username': u'root', > u'deploy_kernel': | > | | u'07c794a6-b427-4e75-ba58-7c555abbf2f8', > u'deploy_ramdisk': u'67a66b7b- | > | | 637f-4b25-bcef-ed39ae32a1f4'} > | > | driver_internal_info | {} > | > | extra | {u'hardware_swift_object': > u'extra_hardware-0956df36-b642-44b8-a67f- | > | | 0df88270372b'} > | > | inspection_finished_at | None > | > | inspection_started_at | None > | > | instance_info | {} > | > | instance_uuid | None > | > | last_error | None > | > | maintenance | False > | > | maintenance_reason | None > | > | name | None > | > | power_state | power off > | > | properties | {u'memory_mb': u'65536', u'cpu_arch': > u'x86_64', u'local_gb': u'371', | > | | u'cpus': u'16', u'capabilities': > u'profile:control,boot_option:local'} | > | provision_state | available > | > | provision_updated_at | 2016-06-20T07:32:46+00:00 > | > | raid_config | > | > | reservation | None > | > | target_power_state | None > | > | target_provision_state | None > | > | target_raid_config | > | > | updated_at | 2016-06-20T07:32:46+00:00 > | > | uuid | 0956df36-b642-44b8-a67f-0df88270372b > | > +------------------------+-------------------------------------------------------------------------+ > > And also the hypervisor stats are set, but only matching the node count. 
> [stack at undercloud ~]$ nova hypervisor-stats > +----------------------+-------+ > | Property | Value | > +----------------------+-------+ > | count | 2 | > | current_workload | 0 | > | disk_available_least | 0 | > | free_disk_gb | 0 | > | free_ram_mb | 0 | > | local_gb | 0 | > | local_gb_used | 0 | > | memory_mb | 0 | > | memory_mb_used | 0 | > | running_vms | 0 | > | vcpus | 0 | > | vcpus_used | 0 | > +----------------------+-------+ > > Registering the nodes as profile:baremetal has the same effect. > > What other parameters are used in deciding whether a node can be > deployed to? I'm probably missing a small detail... what can I check to > make sure the deployment starts? > > regards, > > > Gerard > > From me at gbraad.nl Mon Jun 20 14:52:46 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 20 Jun 2016 22:52:46 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: <5767FF0F.7000307@redhat.com> References: <5767FF0F.7000307@redhat.com> Message-ID: On Mon, Jun 20, 2016 at 10:34 PM, John Trowbridge wrote: > I usually grep for 'returned 0 host' to find which filter is failing Filter RamFilter returned 0 hosts regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From hguemar at fedoraproject.org Mon Jun 20 15:00:07 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 20 Jun 2016 15:00:07 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160620150007.8711E60A4009@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-06-22 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting Agenda at https://etherpad.openstack.org/p/RDO-Meeting Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From me at gbraad.nl Mon Jun 20
15:46:38 2016 From: me at gbraad.nl (Gerard Braad) Date: Mon, 20 Jun 2016 23:46:38 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: References: <5767FF0F.7000307@redhat.com> Message-ID: I finally was able to get some time to look more into the log files. On Mon, Jun 20, 2016 at 10:52 PM, Gerard Braad wrote: > On Mon, Jun 20, 2016 at 10:34 PM, John Trowbridge wrote: >> I usually grep for 'returned 0 host' to find which filter is failing after the Compute capability filter: 2016-06-20 09:15:18.275 1186 DEBUG nova.filters [req-665aaa1f-7788-4aea-85cd-c08097d934c5 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - -] Filter ComputeCapabilitiesFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:104 It seems a node claims to have less than 4G of memory. The node actually contains 64G of memory, and this is also reported by the introspection (node-show): 2016-06-20 09:15:18.276 1186 DEBUG nova.scheduler.filters.ram_filter [req-665aaa1f-7788-4aea-85cd-c08097d934c5 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - -] (undercloud, 0956df36-b642-44b8-a67f-0df88270372b) ram: 0MB disk: 0MB io_ops: 0 instances: 0 does not have 4096 MB usable ram before overcommit, it only has 0 MB.
host_passes /usr/lib/python2.7/site-packages/nova/scheduler/filters/ram_filter.py:45 2016-06-20 09:15:18.276 1186 INFO nova.filters [req-665aaa1f-7788-4aea-85cd-c08097d934c5 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - -] Filter RamFilter returned 0 hosts -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From johfulto at redhat.com Mon Jun 20 16:03:54 2016 From: johfulto at redhat.com (John Fulton) Date: Mon, 20 Jun 2016 12:03:54 -0400 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: References: <5767FF0F.7000307@redhat.com> Message-ID: <576813EA.4060605@redhat.com> On 06/20/2016 11:46 AM, Gerard Braad wrote: > Finally was able to get some time to look more into the log files > > On Mon, Jun 20, 2016 at 10:52 PM, Gerard Braad wrote: >> On Mon, Jun 20, 2016 at 10:34 PM, John Trowbridge wrote: >>> I usually grep for 'returned 0 host' to find which filter is failing > > after the Compute capability filter: > 2016-06-20 09:15:18.275 1186 DEBUG nova.filters > [req-665aaa1f-7788-4aea-85cd-c08097d934c5 > 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - > -] Filter ComputeCapabilitiesFilter returned 1 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:104 > > It seems a node that claims to have less than 4G of memory. The node > actually contains 64G of memory and this is also reported by the > introspection (node-show) > > 2016-06-20 09:15:18.276 1186 DEBUG nova.scheduler.filters.ram_filter > [req-665aaa1f-7788-4aea-85cd-c08097d934c5 > 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - > -] (undercloud, 0956df36-b642-44b8-a67f-0df88270372b) ram: 0MB disk: > 0MB io_ops: 0 instances: 0 does not have 4096 MB usable ram before > overcommit, it only has 0 MB. 
host_passes > /usr/lib/python2.7/site-packages/nova/scheduler/filters/ram_filter.py:45 > 2016-06-20 09:15:18.276 1186 INFO nova.filters > [req-665aaa1f-7788-4aea-85cd-c08097d934c5 > 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - > -] Filter RamFilter returned 0 hosts What does `nova flavor-list` return? I've seen this error and gotten past it by making sure my flavors have only 4G of RAM instead of trying to make them represent the hardware. John From me at gbraad.nl Mon Jun 20 16:13:51 2016 From: me at gbraad.nl (Gerard Braad) Date: Tue, 21 Jun 2016 00:13:51 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: <576813EA.4060605@redhat.com> References: <5767FF0F.7000307@redhat.com> <576813EA.4060605@redhat.com> Message-ID: Hi John, I do not use the oooq_compute (or oooq_control) flavour, but specifically use the compute (and control). On Tue, Jun 21, 2016 at 12:03 AM, John Fulton wrote: > What does `nova flavor-list` return? 
[stack at undercloud ~]$ nova flavor-list +--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+ | 2e72ffb5-c6d7-46fd-ad75-448c0ad6855f | baremetal | 4096 | 40 | 0 | | 1 | 1.0 | True | | 6b8b37e4-618d-4841-b5e3-f556ef27fd4d | oooq_compute | 8192 | 49 | 0 | | 1 | 1.0 | True | | 973b58c3-8730-4b1f-96b2-fda253c15dbc | oooq_control | 8192 | 49 | 0 | | 1 | 1.0 | True | | e22dc516-f53f-4a71-9793-29c614999801 | oooq_ceph | 8192 | 49 | 0 | | 1 | 1.0 | True | | e3dce62a-ac8d-41ba-9f97-84554b247faa | block-storage | 4096 | 40 | 0 | | 1 | 1.0 | True | | f5fe9ba6-cf5c-4ef3-adc2-34f3b4381915 | control | 4096 | 40 | 0 | | 1 | 1.0 | True | | fabf81d8-44cb-4c25-8ed0-2afd124425db | compute | 4096 | 40 | 0 | | 1 | 1.0 | True | | fe512696-2294-40cb-9d20-12415f45c1a9 | ceph-storage | 4096 | 40 | 0 | | 1 | 1.0 | True | | ffc859af-dbfd-4e27-99fb-9ab02f4afa79 | swift-storage | 4096 | 40 | 0 | | 1 | 1.0 | True | +--------------------------------------+---------------+-----------+------+-----------+------+-------+-------------+-----------+ There are both at: 4G / 4096 regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From apevec at redhat.com Mon Jun 20 17:32:54 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 20 Jun 2016 19:32:54 +0200 Subject: [rdo-list] Mouse does not work in a hosted virtual machine using openstack-mitaka release In-Reply-To: References: Message-ID: > Can anyone please suggest how to enable mouse in VM instance ? Thank you in > advance for your time and support. This is not really a primary use-case for cloud infra like OpenStack but to debug this: * try fedora VM using virt-manager i.e. 
direct libvirt on centos7 * try newer fedora 23 guest image Cheers, Alan From rlandy at redhat.com Mon Jun 20 18:55:42 2016 From: rlandy at redhat.com (Ronelle Landy) Date: Mon, 20 Jun 2016 14:55:42 -0400 (EDT) Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: References: <5767FF0F.7000307@redhat.com> Message-ID: <492676804.355507.1466448942784.JavaMail.zimbra@redhat.com> > From: "Gerard Braad" > To: "John Trowbridge" > Cc: "rdo-list" > Sent: Monday, June 20, 2016 11:46:38 AM > Subject: Re: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" > > Finally was able to get some time to look more into the log files > > On Mon, Jun 20, 2016 at 10:52 PM, Gerard Braad wrote: > > On Mon, Jun 20, 2016 at 10:34 PM, John Trowbridge wrote: > >> I usually grep for 'returned 0 host' to find which filter is failing > > after the Compute capability filter: > 2016-06-20 09:15:18.275 1186 DEBUG nova.filters > [req-665aaa1f-7788-4aea-85cd-c08097d934c5 > 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - > -] Filter ComputeCapabilitiesFilter returned 1 host(s) > get_filtered_objects > /usr/lib/python2.7/site-packages/nova/filters.py:104 > > It seems a node that claims to have less than 4G of memory. The node > actually contains 64G of memory and this is also reported by the > introspection (node-show) > > 2016-06-20 09:15:18.276 1186 DEBUG nova.scheduler.filters.ram_filter > [req-665aaa1f-7788-4aea-85cd-c08097d934c5 > 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - > -] (undercloud, 0956df36-b642-44b8-a67f-0df88270372b) ram: 0MB disk: > 0MB io_ops: 0 instances: 0 does not have 4096 MB usable ram before > overcommit, it only has 0 MB. 
host_passes > /usr/lib/python2.7/site-packages/nova/scheduler/filters/ram_filter.py:45 > 2016-06-20 09:15:18.276 1186 INFO nova.filters > [req-665aaa1f-7788-4aea-85cd-c08097d934c5 > 0b48c6614e73469eb2350fe09c33ee10 2fea396ebcab48ed8c150ef31506f66f - - > -] Filter RamFilter returned 0 hosts If you are deploying with Mitaka, please check you have the fix linked here: https://bugs.launchpad.net/tripleo/+bug/1567395 (https://github.com/rdo-packages/ironic-python-agent-distgit/commit/198e5bf53d94a658d836f72a17d1fbbe368e8bd8). Other possible 'No valid host found' culprits: If you are creating your own flavor, ensure that the correct properties are set, for example: openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" <flavor-name> ; The disk size reported after introspection looks fine - so you probably don't require disk hints, but it's something to keep in mind: https://bugzilla.redhat.com/show_bug.cgi?id=1288192 > > > -- > > Gerard Braad | http://gbraad.nl > [ Doing Open Source Matters ] > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Mon Jun 20 19:00:41 2016 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 20 Jun 2016 15:00:41 -0400 Subject: [rdo-list] Red Hat Summit: Demo volunteers wanted In-Reply-To: <9b62e5dd-0b7d-d46f-248a-0ac9a303388c@redhat.com> References: <9b62e5dd-0b7d-d46f-248a-0ac9a303388c@redhat.com> Message-ID: <09eabffd-9a41-c9de-4b53-b624c9925cb1@redhat.com> This is a more or less final reminder that if you're going to be at Red Hat Summit, and you have something that you'd like to show off in the RDO booth, or just hang out and answer questions, the etherpad below is the place to indicate your interest. In any event, drop by for RDO t-shirts and other Red Hat swag. See you in San Francisco!
--Rich On 06/06/2016 03:25 PM, Rich Bowen wrote: > At OpenStack Summit, we had a number of people volunteer to present > demos at the RDO booth and/or answer attendee questions. This was a big > success, with almost every time slot being filled by very helpful people. > > We'd like to do the same thing at Red Hat Summit, which will be held in > 3 weeks in San Francisco. If you plan to attend, and if you have a free > time slot, I would appreciate it if you'd be willing to do a shift in > the booth, and possibly bring a demo along with you. > > Demos *can* be a "live demo", but typically, unless it's completely > self-contained on your laptop, you're better off doing a video, since > network conditions can't be guaranteed. (We usually have a hard-wire > network in the booth, but even that can be flakey at peak times.) > > If you're willing to participate, please claim a slot in the schedule > etherpad, HERE: https://etherpad.openstack.org/p/rhsummit-rdo-booth > > Time slots are mostly 60 minutes. If some other time slot works better > for you, please do feel free to modify the start/end times. Please > indicate what you'll be demoing. > > Thanks! > > --Rich > -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From bderzhavets at hotmail.com Mon Jun 20 20:48:51 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Mon, 20 Jun 2016 20:48:51 +0000 Subject: [rdo-list] mitaka installation In-Reply-To: References: , Message-ID: Option 1 File a bug against packstack, stable release Mitaka, at https://bugzilla.redhat.com and wait for better times to come. Option 2 1. packstack --gen-answer-file answer1.txt 2. Edit answer1.txt CONFIG_KEYSTONE_API_VERSION=v3 3. packstack --answer-file=./answer1.txt It will crash running cinder's puppet; however, # systemctl | grep cinder will look just fine right after the crash. 4. Update answer1.txt and set CONFIG_CINDER_INSTALL=n 5.
packstack --answer-file=./answer1.txt Upon completion, Cinder would work, as far as I remember. This is a hack. Option 3. Read and follow https://www.linux.com/blog/backport-upstream-commits-stable-rdo-mitaka-release-deployments-keystone-api-v3 This is the right way to go, and it demonstrates that you understand what you are doing. Regards. Boris. ------------------------------------------------------------------------------------------------------------------------------------------------- From: Andrey Shevel Sent: Monday, June 20, 2016 1:44 PM To: Boris Derzhavets Subject: Re: [rdo-list] mitaka installation Hello colleagues, I repeated packstack --allinone (mitaka) exactly as described on https://www.rdoproject.org/install/quickstart/ on a newly created VM, newly installed from scratch (as a virtual server), with this OS: [root at openstack-test ~]# cat /etc/os-release* NAME="Scientific Linux" VERSION="7.2 (Nitrogen)" ID="rhel" ID_LIKE="fedora" VERSION_ID="7.2" PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA" HOME_URL="http://www.scientificlinux.org//" BUG_REPORT_URL="mailto:scientific-linux-devel at listserv.fnal.gov" I got exactly the same errors as before =================================== Applying 192.168.122.47_amqp.pp Applying 192.168.122.47_mariadb.pp 192.168.122.47_amqp.pp: [ DONE ] 192.168.122.47_mariadb.pp: [ DONE ] Applying 192.168.122.47_apache.pp 192.168.122.47_apache.pp: [ DONE ] Applying 192.168.122.47_keystone.pp Applying 192.168.122.47_glance.pp Applying 192.168.122.47_cinder.pp 192.168.122.47_keystone.pp: [ ERROR ] Applying Puppet manifests [ ERROR ] ERROR : Error appeared during Puppet run: 192.168.122.47_keystone.pp Error: Could not prefetch keystone_role provider 'openstack': Could not
authenticate You will find full trace in log /var/tmp/packstack/20160620-191746-ud6qNn/manifests/192.168.122.47_keystone.pp.log Please check log file /var/tmp/packstack/20160620-191746-ud6qNn/openstack-setup.log for more information Additional information: * A new answerfile was created in: /root/packstack-answers-20160620-191747.txt * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components. * File /root/keystonerc_admin has been created on OpenStack client host 192.168.122.47. To use the command line tools you need to source the file. * To access the OpenStack Dashboard browse to http://192.168.122.47/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory. * To use Nagios, browse to http://192.168.122.47/nagios username: nagiosadmin, password: 6099599f76754f92 Stop Date & Time = Mon Jun 20 19:26:39 MSK 2016 ================================== It seems there is some drawback in the OpenStack installation. The automatically generated packstack-answers file is attached. I also noticed that http://trystack.org/ shows version 'liberty', not 'mitaka'. Any comments? On Fri, Jun 17, 2016 at 1:35 PM, Boris Derzhavets wrote: > I have a well-tested workaround for CONFIG_KEYSTONE_API_VERSION=v3 based on > > back porting 2 recent upstream commits to stable RDO Mitaka. > > When you run `packstack --allinone`, Keystone API is v2.0 by default, not > v3. > > So you might be focused on v2.0; otherwise let me know. I have detailed > notes > > for the back port ( one more time, thanks to Javier Pena for upstream > work ) > > > Boris. > ________________________________ > From: rdo-list-bounces at redhat.com on behalf of > Andrey Shevel > Sent: Friday, June 17, 2016 4:08 AM > To: alan.pevec at redhat.com > Cc: rdo-list > Subject: Re: [rdo-list] mitaka installation > > The file REINSTALL....
is script to reinstall Openstack-mitaka > > On Thu, Jun 16, 2016 at 9:39 PM, Alan Pevec wrote: >>> ERROR : Error appeared during Puppet run: 193.124.84.22_keystone.pp >>> Error: Could not prefetch keystone_role provider 'openstack': Could >>> not authenticate >>> You will find full trace in log >>> >>> /var/tmp/packstack/20160616-133447-C9hfh9/manifests/193.124.84.22_keystone.pp.log >> >> ^ please paste this file so we can see more details about the error > > > > -- > Andrey Y Shevel -- Andrey Y Shevel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Mon Jun 20 22:40:23 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Mon, 20 Jun 2016 18:40:23 -0400 Subject: [rdo-list] Mouse does not work in a hosted virtual machine using openstack-mitaka release In-Reply-To: References: Message-ID: try - x forwarding over ssh - vnc - etc On Mon, Jun 20, 2016 at 1:32 PM, Alan Pevec wrote: > > Can anyone please suggest how to enable mouse in VM instance ? Thank you > in > > advance for your time and support. > > This is not really a primary use-case for cloud infra like OpenStack > but to debug this: > * try fedora VM using virt-manager i.e. direct libvirt on centos7 > * try newer fedora 23 guest image > > Cheers, > Alan > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- *805010942448935* *GR750055912MA* *Link to me on LinkedIn * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From me at gbraad.nl Tue Jun 21 01:35:36 2016 From: me at gbraad.nl (Gerard Braad) Date: Tue, 21 Jun 2016 09:35:36 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: <492676804.355507.1466448942784.JavaMail.zimbra@redhat.com> References: <5767FF0F.7000307@redhat.com> <492676804.355507.1466448942784.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Jun 21, 2016 at 2:55 AM, Ronelle Landy wrote: > If you are deploying with Mitaka Yes, I am deploying Mitaka... However, I have also tried to use Liberty, but this results in a failure of the introspection: [stack at undercloud ~]$ openstack baremetal introspection bulk start Setting nodes for introspection to manageable... Starting introspection of node: d9685415-328f-4f37-aabe-f4edfb0eebaf Starting introspection of node: 702b4804-a9d9-4d3b-895c-019377501022 Waiting for introspection to finish... Introspection for UUID d9685415-328f-4f37-aabe-f4edfb0eebaf finished with error: The following required parameters are missing: ['local_gb'] Introspection for UUID 702b4804-a9d9-4d3b-895c-019377501022 finished with error: The following required parameters are missing: ['local_gb'] Setting manageable nodes to available... Introspection completed with errors: d9685415-328f-4f37-aabe-f4edfb0eebaf: The following required parameters are missing: ['local_gb'] 702b4804-a9d9-4d3b-895c-019377501022: The following required parameters are missing: ['local_gb'] -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From alon.dotan at hpe.com Tue Jun 21 03:51:13 2016 From: alon.dotan at hpe.com (Dotan, Alon) Date: Tue, 21 Jun 2016 03:51:13 +0000 Subject: [rdo-list] Issue with assigning multiple VFs to VM instance In-Reply-To: References: Message-ID: Which method you are using? Neutron sriov plugin? Nova (via flavor definition)? 
Neutron without sriov plugin (which I think is the best) Thanks, From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Chinmaya Dwibedy Sent: Monday, June 20, 2016 14:53 To: rdo-list at redhat.com Subject: Re: [rdo-list] Issue with assigning multiple VFs to VM instance Hi, Can anyone please suggest how to assign multiple VF devices to a VM instance using the OpenStack Mitaka release? Thank you in advance for your time and support. Regards, Chinmaya On Thu, Jun 16, 2016 at 5:42 PM, Chinmaya Dwibedy > wrote: Hi All, I have installed the OpenStack Mitaka release on a CentOS 7 system. It has two Intel QAT devices. There are 32 VF devices available per QAT Device/DH895xCC device. [root at localhost nova(keystone_admin)]# lspci -nn | grep 0435 83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435] 88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435] [root at localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs 32 [root at localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs 32 [root at localhost nova(keystone_admin)]# Changed the nova configuration (as stated below) for exposing VFs (via PCI passthrough) to the instances. pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-VF"} pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}] Restarted the nova compute, nova API and nova scheduler services: service openstack-nova-compute restart;service openstack-nova-api restart;systemctl restart openstack-nova-scheduler; scheduler_available_filters=nova.scheduler.filters.all_filters scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter Thereafter, all the available VFs (64) show up in the nova database upon select * from pci_devices.
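Both pci_alias and pci_passthrough_whitelist must be valid JSON, and a malformed value can break device discovery without an obvious error. A quick sanity check is possible with nothing but python's json module (a sketch; the two values mirror the ones quoted in this message):

```shell
# Validate nova.conf PCI JSON values before restarting the services.
check_json() {
  python -c 'import json,sys; json.loads(sys.argv[1]); print("ok")' "$1" \
    || echo "INVALID: $1"
}

check_json '{"name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-VF"}'
check_json '[{"vendor_id": "8086", "product_id": "0443"}]'
```

Each well-formed value prints "ok"; a stray brace or bracket prints an INVALID line instead, which is much easier to spot than a silently empty pci_devices table.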
Set the flavor 4 to allow passing two VFs to instances. [root at localhost nova(keystone_admin)]# nova flavor-show 4 +----------------------------+------------------------------------------------------------+ | Property | Value | +----------------------------+------------------------------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 80 | | extra_specs | {"pci_passthrough:alias": "QuickAssist:2"} | | id | 4 | | name | m1.large | | os-flavor-access:is_public | True | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 4 | +----------------------------+------------------------------------------------------------+ [root at localhost nova(keystone_admin)]# Also when I launch an instance using this new flavor, it goes into an error state nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST Here goes the output of nova-conductor.log 2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations dests = self.driver.select_destinations(ctxt, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations raise exception.NoValidHost(reason=reason) NoValidHost: No valid host was found. There are not enough hosts available. 
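For reference, the `pci_passthrough:alias` extra spec is a comma-separated list of `name:count` entries, and the scheduler then tries to take that many devices from matching pools (like the `count=63` pool reported in the nova-compute log). A much-simplified sketch of that matching — names and structure here are illustrative, not nova's actual code:

```python
# Simplified sketch of alias-based PCI scheduling (illustrative only;
# nova's real PciPassthroughFilter / pci_stats code is more involved).

def parse_alias_spec(spec):
    """Parse a 'pci_passthrough:alias' value like 'QuickAssist:2'."""
    requests = []
    for entry in spec.split(","):
        name, _, count = entry.partition(":")
        requests.append((name.strip(), int(count or 1)))  # count defaults to 1
    return requests

def pools_can_satisfy(pools, requests, aliases):
    """Check whether the device pools can cover every (alias, count) request."""
    for name, count in requests:
        alias = aliases[name]  # alias maps to a vendor/product match
        matching = [p for p in pools
                    if p["vendor_id"] == alias["vendor_id"]
                    and p["product_id"] == alias["product_id"]]
        if sum(p["count"] for p in matching) < count:
            return False
    return True

aliases = {"QuickAssist": {"vendor_id": "8086", "product_id": "0443"}}
pools = [  # roughly the pci_stats nova-compute reports in this thread
    {"vendor_id": "8086", "product_id": "10fb", "count": 0},
    {"vendor_id": "8086", "product_id": "0443", "count": 63},
]
print(pools_can_satisfy(pools, parse_alias_spec("QuickAssist:2"), aliases))  # True
```

By this naive count a 63-VF pool easily covers a request for 2, which suggests the real failure lies in a stricter constraint the sketch ignores (for example NUMA affinity, or the configured alias not matching at request-build time).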
Here goes the output of nova-compute.log 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus: 36, total allocated vcpus: 16 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view: name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB used_disk=320GB total_vcpus=36 used_vcpus=16 pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'), PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')] 2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record updated for localhost:localhost Here goes the output of nova-scheduler.log 2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (-141 GB > -271 GB) 2016-06-16 07:55:34.637 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts 2016-06-16 07:55:34.638 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. 
Filter results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced instances from host 'localhost'. 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced instances from host 'localhost'. Note that, If I set the flavor as (#nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1") , it assigns a single VF to VM instance. I think, multiple PFs can be assigned per VM. Can anyone please suggest , where I am wrong and the way to solve this ? Thank you in advance for your support and help. Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at gbraad.nl Tue Jun 21 05:33:09 2016 From: me at gbraad.nl (Gerard Braad) Date: Tue, 21 Jun 2016 13:33:09 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: References: <5767FF0F.7000307@redhat.com> <492676804.355507.1466448942784.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Jun 21, 2016 at 9:35 AM, Gerard Braad wrote: > On Tue, Jun 21, 2016 at 2:55 AM, Ronelle Landy wrote: >> If you are deploying with Mitaka > Yes, I am deploying Mitaka... Currently deploying from a fresh installation of Mitaka quickstart... 
I have not changed my setup (using the same deployment scripts and parameters) 2016-06-21 02:39:18 [Controller]: CREATE_COMPLETE state changed 2016-06-21 02:39:19 [ExternalPort]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:19 [TenantPort]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:20 [StoragePort]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:21 [ManagementPort]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:21 [UpdateDeployment]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:23 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:23 [InternalApiPort]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:25 [ManagementPort]: CREATE_COMPLETE state changed 2016-06-21 02:39:26 [ExternalPort]: CREATE_COMPLETE state changed 2016-06-21 02:39:26 [TenantPort]: CREATE_COMPLETE state changed 2016-06-21 02:39:26 [StoragePort]: CREATE_COMPLETE state changed 2016-06-21 02:39:27 [StorageMgmtPort]: CREATE_COMPLETE state changed 2016-06-21 02:39:27 [InternalApiPort]: CREATE_COMPLETE state changed 2016-06-21 02:39:27 [NetworkConfig]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:28 [NetIpSubnetMap]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:29 [NetIpMap]: CREATE_IN_PROGRESS state changed 2016-06-21 02:39:31 [NetworkConfig]: CREATE_COMPLETE state changed 2016-06-21 02:39:31 [NetIpMap]: CREATE_COMPLETE state changed 2016-06-21 02:39:32 [NetIpSubnetMap]: CREATE_COMPLETE state changed 2016-06-21 02:39:32 [NetworkDeployment]: CREATE_IN_PROGRESS state changed And this seems to pass the Nova scheduler filter check... and deploys. But now the undercloud lost connectivity again. (and deployment hangs) [gerard at server-124 ~]$ !407 ssh -F /home/gerard/.quickstart/ssh.config.ansible undercloud Warning: Permanently added 'server-124.local,10.0.106.124' (ECDSA) to the list of known hosts. 
channel 0: open failed: connect failed: No route to host ssh_exchange_identification: Connection closed by remote host This disconnection is occurring on different machines, all with base 7.2 and the quickstart. Seems I have to shave this yak... regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From ckdwibedy at gmail.com Tue Jun 21 07:32:17 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Tue, 21 Jun 2016 13:02:17 +0530 Subject: [rdo-list] Issue with assigning multiple VFs to VM instance In-Reply-To: References: Message-ID: Hi Alon, Thank you for your response. I am using Nova flavor definition. But when I launch an instance using this new flavor, it goes into an error state nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST Here goes the output of nova-scheduler.log 2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (-141 GB > -271 GB) 2016-06-16 07:55:34.637 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts 2016-06-16 07:55:34.638 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. 
Filter results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced instances from host 'localhost'. 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced instances from host 'localhost'. Note that, If I set the flavor as (#nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1") , it assigns a single VF to VM instance. Getting issue when I try to assign multiple VFs per VM. Can you please suggest , where I am wrong and the way to solve this ? Regards, Chinmaya On Tue, Jun 21, 2016 at 9:21 AM, Dotan, Alon wrote: > Which method you are using? > > Neutron sriov plugin? Nova (via flavor definition)? Neutron without sriov > plugin (which I think is the best) > > Thanks, > > > > *From:* rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] *On > Behalf Of *Chinmaya Dwibedy > *Sent:* Monday, June 20, 2016 14:53 > *To:* rdo-list at redhat.com > *Subject:* Re: [rdo-list] Issue with assigning multiple VFs to VM instance > > > > > > Hi , > > > > Can anyone please suggest how to assign multiple VF devices to > VM instance using open-stack openstack-mitaka release? Thank you in advance > for your time and support. > > > > Regards, > > Chinmaya > > > > On Thu, Jun 16, 2016 at 5:42 PM, Chinmaya Dwibedy > wrote: > > Hi All, > > > > I have installed open-stack openstack-mitaka release on CentO7 system . It > has two Intel QAT devices. There are 32 VF devices available per QAT > Device/DH895xCC device. 
> > > > [root at localhost nova(keystone_admin)]# lspci -nn | grep 0435 > > 83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT > [8086:0435] > > 88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT > [8086:0435] > > [root at localhost nova(keystone_admin)]# cat > /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs > > 32 > > [root at localhost nova(keystone_admin)]# cat > /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs > > 32 > > [root at localhost nova(keystone_admin)]# > > > > Changed the nova configuration (as stated below) for exposing VF ( via > PCI-passthrough) to the instances. > > > > pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id": > "8086", "device_type": "type-VF"} > > pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}}] > > Restarted the nova compute, nova API and nova scheduler service > > service openstack-nova-compute restart;service openstack-nova-api > restart;systemctl restart openstack-nova-scheduler; > > scheduler_available_filters=nova.scheduler.filters.all_filters > > > scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter > > > > Thereafter it shows all the available VFs (64) in nova database upon > select * from pci_devices. Set the flavor 4 to allow passing two VFs to > instances. 
> > > > [root at localhost nova(keystone_admin)]# nova flavor-show 4 > > > +----------------------------+------------------------------------------------------------+ > > | Property | > Value | > > > +----------------------------+------------------------------------------------------------+ > > | OS-FLV-DISABLED:disabled | > False | > > | OS-FLV-EXT-DATA:ephemeral | > 0 | > > | disk | 80 > | > > | extra_specs | {"pci_passthrough:alias": "QuickAssist:2"} | > > | id | > 4 | > > | name | > m1.large | > > | os-flavor-access:is_public | > True | > > | ram | > 8192 | > > | rxtx_factor | > 1.0 | > > | swap > | | > > | vcpus | > 4 | > > > +----------------------------+------------------------------------------------------------+ > > [root at localhost nova(keystone_admin)]# > > > > Also when I launch an instance using this new flavor, it goes into an > error state > > > > nova boot --flavor 4 --key_name oskey1 --image > bc859dc5-103b-428b-814f-d36e59009454 --nic > net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST > > > > > > Here goes the output of nova-conductor.log > > > > 2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to > compute_task_build_instances: No valid host was found. There are not enough > hosts available. > > Traceback (most recent call last): > > > > File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", > line 150, in inner > > return func(*args, **kwargs) > > > > File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line > 104, in select_destinations > > dests = self.driver.select_destinations(ctxt, spec_obj) > > > > File > "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line > 74, in select_destinations > > raise exception.NoValidHost(reason=reason) > > > > NoValidHost: No valid host was found. There are not enough hosts available. 
> > > > Here goes the output of nova-compute.log > > > > 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker > [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus: > 36, total allocated vcpus: 16 > > 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker > [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view: > name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB > used_disk=320GB total_vcpus=36 used_vcpus=16 > pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'), > PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')] > > 2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker > [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record > updated for localhost:localhost > > > > Here goes the output of nova-scheduler.log > > > > 2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space > than database expected (-141 GB > -271 GB) > > 2016-06-16 07:55:34.637 171018 INFO nova.filters > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter > returned 0 hosts > > 2016-06-16 07:55:34.638 171018 INFO nova.filters > [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 > 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the > request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. 
Filter > results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: > 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', > 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: > (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] > > 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager > [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced > instances from host 'localhost'. > > 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager > [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced > instances from host 'localhost'. > > > > Note that, If I set the flavor as (#nova flavor-key 4 set > "pci_passthrough:alias"="QuickAssist:1") , it assigns a single VF to VM > instance. I think, multiple PFs can be assigned per VM. Can anyone please > suggest , where I am wrong and the way to solve this ? Thank you in advance > for your support and help. > > > > > > > > Regards, > > Chinmaya > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alon.dotan at hpe.com Tue Jun 21 07:40:42 2016 From: alon.dotan at hpe.com (Alon Dotan) Date: Tue, 21 Jun 2016 10:40:42 +0300 Subject: [rdo-list] Issue with assigning multiple VFs to VM instance In-Reply-To: References: Message-ID: <518134bf-444b-6585-a39d-0751c7e0c6bc@hpe.com> any particular reason not using neutron api with port-create? any how can you add the output of lspci -tvn ? ------------------------------------------------------------------------ *From:* Chinmaya Dwibedy *Sent:* Tuesday, June 21, 2016 10:32AM *To:* Dotan, Alon *Cc:* Rdo-list *Subject:* Re: [rdo-list] Issue with assigning multiple VFs to VM instance Hi Alon, Thank you for your response. I am using Nova flavor definition. 
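Alon's suggested alternative — attaching VFs through the Neutron API rather than a flavor alias — normally means creating one SR-IOV port per VF and booting with those ports. The snippet below only assembles the commands as strings, as a hedged sketch of the Mitaka-era workflow; the network name, the `<port-N-uuid>` placeholders, and the exact CLI flag spellings are assumptions to adapt, and this path typically also needs the SR-IOV mechanism driver configured on the Neutron side.

```python
# Hedged sketch: the Neutron SR-IOV route binds one VF per port created
# with vnic_type=direct, then boots with multiple --nic port-id arguments.
# 'sriov-net' and the flag spellings are placeholders, not verified CLI.

def sriov_boot_commands(network, n_vfs, flavor, image, vm_name):
    """Build the port-create and boot command strings for n_vfs VFs."""
    port_cmds = [
        "neutron port-create %s --name vf-port-%d --binding:vnic_type direct"
        % (network, i)
        for i in range(n_vfs)
    ]
    nics = " ".join("--nic port-id=<port-%d-uuid>" % i for i in range(n_vfs))
    boot_cmd = "nova boot --flavor %s --image %s %s %s" % (flavor, image, nics, vm_name)
    return port_cmds, boot_cmd

ports, boot = sriov_boot_commands(
    "sriov-net", 2, "4", "bc859dc5-103b-428b-814f-d36e59009454", "TEST")
for cmd in ports:
    print(cmd)
print(boot)
```

Each direct port pins one VF, so requesting two VFs means creating two ports and passing both port IDs at boot.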
But when I launch an instance using this new flavor, it goes into an error state nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST Here goes the output of nova-scheduler.log 2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (-141 GB > -271 GB) 2016-06-16 07:55:34.637 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts 2016-06-16 07:55:34.638 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. Filter results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced instances from host 'localhost'. 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced instances from host 'localhost'. Note that, If I set the flavor as (#nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1") , it assigns a single VF to VM instance. Getting issue when I try to assign multiple VFs per VM. Can you please suggest , where I am wrong and the way to solve this ? Regards, Chinmaya On Tue, Jun 21, 2016 at 9:21 AM, Dotan, Alon > wrote: Which method you are using? Neutron sriov plugin? 
Nova (via flavor definition)? Neutron without sriov plugin (which I think is the best) Thanks, *From:*rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com ] *On Behalf Of *Chinmaya Dwibedy *Sent:* Monday, June 20, 2016 14:53 *To:* rdo-list at redhat.com *Subject:* Re: [rdo-list] Issue with assigning multiple VFs to VM instance Hi , Can anyone please suggest how to assign multiple VF devices to VM instance using open-stack openstack-mitaka release? Thank you in advance for your time and support. Regards, Chinmaya On Thu, Jun 16, 2016 at 5:42 PM, Chinmaya Dwibedy > wrote: Hi All, I have installed open-stack openstack-mitaka release on CentO7 system . It has two Intel QAT devices. There are 32 VF devices available per QAT Device/DH895xCC device. [root at localhost nova(keystone_admin)]# lspci -nn | grep 0435 83:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435] 88:00.0 Co-processor [0b40]: Intel Corporation DH895XCC Series QAT [8086:0435] [root at localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs 32 [root at localhost nova(keystone_admin)]# cat /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs 32 [root at localhost nova(keystone_admin)]# Changed the nova configuration (as stated below) for exposing VF ( via PCI-passthrough) to the instances. pci_alias = {"name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-VF"} pci_passthrough_whitelist = [{"vendor_id":"8086","product_id":"0443"}}] Restarted the nova compute, nova API and nova scheduler service service openstack-nova-compute restart;service openstack-nova-api restart;systemctl restart openstack-nova-scheduler; scheduler_available_filters=nova.scheduler.filters.all_filters scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter Thereafter it shows all the available VFs (64) in nova database upon select * from pci_devices. 
Set the flavor 4 to allow passing two VFs to instances. [root at localhost nova(keystone_admin)]# nova flavor-show 4 +----------------------------+------------------------------------------------------------+ | Property | Value | +----------------------------+------------------------------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 80 | | extra_specs | {"pci_passthrough:alias": "QuickAssist:2"} | | id | 4 | | name | m1.large | | os-flavor-access:is_public | True | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 4 | +----------------------------+------------------------------------------------------------+ [root at localhost nova(keystone_admin)]# Also when I launch an instance using this new flavor, it goes into an error state nova boot --flavor 4 --key_name oskey1 --image bc859dc5-103b-428b-814f-d36e59009454 --nic net-id=e2ca118d-1f25-47de-8524-bb2a2635c4be TEST Here goes the output of nova-conductor.log 2016-06-16 07:55:34.640 5094 WARNING nova.scheduler.utils [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations dests = self.driver.select_destinations(ctxt, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations raise exception.NoValidHost(reason=reason) NoValidHost: No valid host was found. There are not enough hosts available. 
Here goes the output of nova-compute.log 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Total usable vcpus: 36, total allocated vcpus: 16 2016-06-16 07:57:32.502 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Final resource view: name=localhost phys_ram=128721MB used_ram=33280MB phys_disk=49GB used_disk=320GB total_vcpus=36 used_vcpus=16 pci_stats=[PciDevicePool(count=0,numa_node=0,product_id='10fb',tags={dev_type='type-PF'},vendor_id='8086'), PciDevicePool(count=63,numa_node=1,product_id='0443',tags={dev_type='type-VF'},vendor_id='8086')] 2016-06-16 07:57:33.803 170789 INFO nova.compute.resource_tracker [req-4529a2a8-390f-4620-98b3-d3eb77e077a3 - - - - -] Compute_service record updated for localhost:localhost Here goes the output of nova-scheduler.log 2016-06-16 07:55:34.636 171018 WARNING nova.scheduler.host_manager [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Host localhost has more disk space than database expected (-141 GB > -271 GB) 2016-06-16 07:55:34.637 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filter PciPassthroughFilter returned 0 hosts 2016-06-16 07:55:34.638 171018 INFO nova.filters [req-6189d3c8-5587-4350-8cd3-704fd35cf2ad 266f5859848e4f39b9725203dda5c3f2 4bc608763cee41d9a8df26d3ef919825 - - -] Filtering removed all hosts for the request with instance ID '4f68c680-5a17-4a38-a6df-5cdb6d76d75b'. 
Filter results: ['RamFilter: (start: 1, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'AvailabilityZoneFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'PciPassthroughFilter: (start: 1, end: 0)'] 2016-06-16 07:56:14.743 171018 INFO nova.scheduler.host_manager [req-64a8dc31-f2ab-4d93-8579-6b9f8210ece7 - - - - -] Successfully synced instances from host 'localhost'. 2016-06-16 07:58:17.748 171018 INFO nova.scheduler.host_manager [req-152ac777-1f77-433d-8493-6cd86ab3e0fc - - - - -] Successfully synced instances from host 'localhost'. Note that, If I set the flavor as (#nova flavor-key 4 set "pci_passthrough:alias"="QuickAssist:1") , it assigns a single VF to VM instance. I think, multiple PFs can be assigned per VM. Can anyone please suggest , where I am wrong and the way to solve this ? Thank you in advance for your support and help. Regards, Chinmaya -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Tue Jun 21 12:30:03 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 21 Jun 2016 08:30:03 -0400 Subject: [rdo-list] LinuxCon Berlin, RDO ambassadors Message-ID: <920e1c6f-e031-ba59-6622-83c77ece211b@redhat.com> We have an opportunity to have an expo hall presence at LinuxCon Berlin (October 4-6 - http://events.linuxfoundation.org/events/linuxcon-europe ) If you are either in that area, or are likely to attend LinuxCon anyways, we're looking for volunteers to spend a shift in the RDO booth to answer questions about RDO and OpenStack. The space is usually also shared with other projects (CentOS, oVirt, Atomic, Ceph, Gluster, and possibly others) so you won't be there alone. If you are interested/willing, please get in touch with me. Thank you. 
--Rich -- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From ayoung at redhat.com Tue Jun 21 14:00:03 2016 From: ayoung at redhat.com (Adam Young) Date: Tue, 21 Jun 2016 10:00:03 -0400 Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org In-Reply-To: References: Message-ID: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com> On 06/20/2016 10:33 AM, Rich Bowen wrote: > 59 unanswered questions: No there are not. I looked at the Keystone question. There are 3 responses, and no feedback from the original poster. > > Unable to start Ceilometer services > https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ > Tags: ceilometer, ceilometer-api > > Dashboard console - Keyboard and mouse issue in Linux graphical environmevt > https://ask.openstack.org/en/question/93583/dashboard-console-keyboard-and-mouse-issue-in-linux-graphical-environmevt/ > Tags: nova, nova-console > > Adding hard drive space to RDO installation > https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ > Tags: cinder, openstack, space, add > > AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack > https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ > Tags: openstack, networking, aws > > ceilometer: I've installed openstack mitaka. 
but swift stops working > when i configured the pipeline and ceilometer filter > https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ > Tags: ceilometer, openstack-swift, mitaka > > Fail on installing the controller on Cent OS 7 > https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ > Tags: installation, centos7, controller > > the error of service entity and API endpoints > https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ > Tags: service, entity, and, api, endpoints > > Running delorean fails: Git won't fetch sources > https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ > Tags: delorean, rdo > > RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org > https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-cant-resolve-trunk-mgtrdoprojectorg/ > Tags: rdo-manager > > Keystone authentication: Failed to contact the endpoint. > https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ > Tags: keystone, authenticate, endpoint, murano > > adding computer node. > https://ask.openstack.org/en/question/91417/adding-computer-node/ > Tags: rdo, openstack > > Liberty RDO: stack resource topology icons are pink > https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ > Tags: stack, resource, topology, dashboard > > Build of instance aborted: Block Device Mapping is Invalid. 
> https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ > Tags: cinder, lvm, centos7 > > No handlers could be found for logger "oslo_config.cfg" while syncing > the glance database > https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ > Tags: liberty, glance, install-openstack > > how to use chef auto manage openstack in RDO? > https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ > Tags: chef, rdo > > Separate Cinder storage traffic from management > https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ > Tags: cinder, separate, nic, iscsi > > Openstack installation fails using packstack, failure is in installation > of openstack-nova-compute. Error: Dependency Package[nova-compute] has > failures > https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ > Tags: novacompute, rdo, packstack, dependency, failure > > CentOS OpenStack - compute node can't talk > https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ > Tags: rdo > > How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on > RDO Liberty ? 
> https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ > Tags: rdo, liberty, swift, ha > > VM and container can't download anything from internet > https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ > Tags: rdo, neutron, network, connectivity > > Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ > https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ > Tags: keyboard, map, keymap, vncproxy, novnc > > OpenStack-Docker driver failed > https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ > Tags: docker, openstack, liberty > > Can't create volume with cinder > https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ > Tags: cinder, glusterfs, nfs > > Sahara SSHException: Error reading SSH protocol banner > https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ > Tags: sahara, icehouse, ssh, vanila > > Error Sahara create cluster: 'Error attach volume to instance > https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ > Tags: sahara, attach-volume, vanila, icehouse > > Creating Sahara cluster: Error attach volume to instance > https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ > Tags: sahara, attach-volume, hadoop, icehouse, vanilla > > Routing between two tenants > https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ > Tags: kilo, fuel, rdo, routing > > RDO kilo installation metadata widget doesn't work > https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ > Tags: kilo, flavor, metadata > > Not able to ssh into RDO Kilo instance > https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ > Tags: rdo, instance-ssh > > 
redhat RDO enable access to swift via S3 > https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ > Tags: swift, s3 > >

From rbowen at redhat.com Tue Jun 21 14:50:39 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 21 Jun 2016 10:50:39 -0400
Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org
In-Reply-To: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com>
References: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com>
Message-ID: <13ae5bf4-582a-d4a7-d208-81c50f1b55a0@redhat.com>

On 06/21/2016 10:00 AM, Adam Young wrote:
> On 06/20/2016 10:33 AM, Rich Bowen wrote:
>> 59 unanswered questions:
> No there are not.
>
> I looked at the Keystone question. There are 3 responses, and no
> feedback from the original poster.

Ah. Sorry. My script asks the API, and only looks for responses that
have actually been accepted as an answer. Perhaps I should have it check
for responses, rather than answers. Thanks for the feedback.

--Rich

> > >> >> Unable to start Ceilometer services >> https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ >> >> Tags: ceilometer, ceilometer-api >> >> Dashboard console - Keyboard and mouse issue in Linux graphical >> environmevt >> https://ask.openstack.org/en/question/93583/dashboard-console-keyboard-and-mouse-issue-in-linux-graphical-environmevt/ >> >> Tags: nova, nova-console >> >> Adding hard drive space to RDO installation >> https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ >> >> Tags: cinder, openstack, space, add >> >> AWS Ec2 inst Eth port loses IP when attached to linux bridge in Openstack >> https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ >> >> Tags: openstack, networking, aws >> >> ceilometer: I've installed openstack mitaka.
but swift stops working >> when i configured the pipeline and ceilometer filter >> https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ >> >> Tags: ceilometer, openstack-swift, mitaka >> >> Fail on installing the controller on Cent OS 7 >> https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ >> >> Tags: installation, centos7, controller >> >> the error of service entity and API endpoints >> https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ >> >> Tags: service, entity, and, api, endpoints >> >> Running delorean fails: Git won't fetch sources >> https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ >> >> Tags: delorean, rdo >> >> RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org >> https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-cant-resolve-trunk-mgtrdoprojectorg/ >> >> Tags: rdo-manager >> >> Keystone authentication: Failed to contact the endpoint. >> https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ >> >> Tags: keystone, authenticate, endpoint, murano >> >> adding computer node. >> https://ask.openstack.org/en/question/91417/adding-computer-node/ >> Tags: rdo, openstack >> >> Liberty RDO: stack resource topology icons are pink >> https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ >> >> Tags: stack, resource, topology, dashboard >> >> Build of instance aborted: Block Device Mapping is Invalid. 
>> https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ >> >> Tags: cinder, lvm, centos7 >> >> No handlers could be found for logger "oslo_config.cfg" while syncing >> the glance database >> https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ >> >> Tags: liberty, glance, install-openstack >> >> how to use chef auto manage openstack in RDO? >> https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ >> >> Tags: chef, rdo >> >> Separate Cinder storage traffic from management >> https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ >> >> Tags: cinder, separate, nic, iscsi >> >> Openstack installation fails using packstack, failure is in installation >> of openstack-nova-compute. Error: Dependency Package[nova-compute] has >> failures >> https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ >> >> Tags: novacompute, rdo, packstack, dependency, failure >> >> CentOS OpenStack - compute node can't talk >> https://ask.openstack.org/en/question/88989/centos-openstack-compute-node-cant-talk/ >> >> Tags: rdo >> >> How to setup SWIFT_PROXY_NODE and SWIFT_STORAGE_NODEs separately on >> RDO Liberty ? 
>> https://ask.openstack.org/en/question/88897/how-to-setup-swift_proxy_node-and-swift_storage_nodes-separately-on-rdo-liberty/ >> >> Tags: rdo, liberty, swift, ha >> >> VM and container can't download anything from internet >> https://ask.openstack.org/en/question/88338/vm-and-container-cant-download-anything-from-internet/ >> >> Tags: rdo, neutron, network, connectivity >> >> Fedora22, Liberty, horizon VNC console and keymap=sv with ; and/ >> https://ask.openstack.org/en/question/87451/fedora22-liberty-horizon-vnc-console-and-keymapsv-with-and/ >> >> Tags: keyboard, map, keymap, vncproxy, novnc >> >> OpenStack-Docker driver failed >> https://ask.openstack.org/en/question/87243/openstack-docker-driver-failed/ >> >> Tags: docker, openstack, liberty >> >> Can't create volume with cinder >> https://ask.openstack.org/en/question/86670/cant-create-volume-with-cinder/ >> >> Tags: cinder, glusterfs, nfs >> >> Sahara SSHException: Error reading SSH protocol banner >> https://ask.openstack.org/en/question/84710/sahara-sshexception-error-reading-ssh-protocol-banner/ >> >> Tags: sahara, icehouse, ssh, vanila >> >> Error Sahara create cluster: 'Error attach volume to instance >> https://ask.openstack.org/en/question/84651/error-sahara-create-cluster-error-attach-volume-to-instance/ >> >> Tags: sahara, attach-volume, vanila, icehouse >> >> Creating Sahara cluster: Error attach volume to instance >> https://ask.openstack.org/en/question/84650/creating-sahara-cluster-error-attach-volume-to-instance/ >> >> Tags: sahara, attach-volume, hadoop, icehouse, vanilla >> >> Routing between two tenants >> https://ask.openstack.org/en/question/84645/routing-between-two-tenants/ >> Tags: kilo, fuel, rdo, routing >> >> RDO kilo installation metadata widget doesn't work >> https://ask.openstack.org/en/question/83870/rdo-kilo-installation-metadata-widget-doesnt-work/ >> >> Tags: kilo, flavor, metadata >> >> Not able to ssh into RDO Kilo instance >> 
https://ask.openstack.org/en/question/83707/not-able-to-ssh-into-rdo-kilo-instance/ >> >> Tags: rdo, instance-ssh >> >> redhat RDO enable access to swift via S3 >> https://ask.openstack.org/en/question/83607/redhat-rdo-enable-access-to-swift-via-s3/ >> >> Tags: swift, s3 >> >> >>
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

--
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity

From rlandy at redhat.com Tue Jun 21 14:56:59 2016
From: rlandy at redhat.com (Ronelle Landy)
Date: Tue, 21 Jun 2016 10:56:59 -0400 (EDT)
Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found"
In-Reply-To:
References: <5767FF0F.7000307@redhat.com>
 <492676804.355507.1466448942784.JavaMail.zimbra@redhat.com>
Message-ID: <466457170.582984.1466521019177.JavaMail.zimbra@redhat.com>

> From: "Gerard Braad"
> To: "Ronelle Landy"
> Cc: "John Trowbridge" , "rdo-list"
> Sent: Tuesday, June 21, 2016 1:33:09 AM
> Subject: Re: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found"
>
> On Tue, Jun 21, 2016 at 9:35 AM, Gerard Braad wrote:
> > On Tue, Jun 21, 2016 at 2:55 AM, Ronelle Landy wrote:
> >> If you are deploying with Mitaka
> > Yes, I am deploying Mitaka...
>
> Currently deploying from a fresh installation of Mitaka quickstart...
> I have not changed my setup (using the same deployment scripts and
> parameters)
>
> 2016-06-21 02:39:18 [Controller]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:19 [ExternalPort]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:19 [TenantPort]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:20 [StoragePort]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:21 [ManagementPort]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:21 [UpdateDeployment]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:23 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:23 [InternalApiPort]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:25 [ManagementPort]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:26 [ExternalPort]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:26 [TenantPort]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:26 [StoragePort]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:27 [StorageMgmtPort]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:27 [InternalApiPort]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:27 [NetworkConfig]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:28 [NetIpSubnetMap]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:29 [NetIpMap]: CREATE_IN_PROGRESS state changed
> 2016-06-21 02:39:31 [NetworkConfig]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:31 [NetIpMap]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:32 [NetIpSubnetMap]: CREATE_COMPLETE state changed
> 2016-06-21 02:39:32 [NetworkDeployment]: CREATE_IN_PROGRESS state changed
>
> And this seems to pass the Nova scheduler filter check... and deploys.

Ok - so your previous issue of 'no hosts' seems to be resolved with
latest Mitaka.

>
> But now the undercloud lost connectivity again. (and deployment hangs)
>
> [gerard at server-124 ~]$ !407
> ssh -F /home/gerard/.quickstart/ssh.config.ansible undercloud
> Warning: Permanently added 'server-124.local,10.0.106.124' (ECDSA) to
> the list of known hosts.
> channel 0: open failed: connect failed: No route to host
> ssh_exchange_identification: Connection closed by remote host
>
> This disconnection is occurring on different machines, all with base
> 7.2 and the quickstart.

I have seen a similar loss of connection working on Newton during
deploy/introspection:

fatal: [undercloud]: UNREACHABLE! => {"changed": false, "msg": "Failed
to connect to the host via ssh.", "unreachable": true}

It is possibly an environment configuration issue - I am looking into
it more.

>
> Seems I have to shave this yak...
>
> regards,
>
> Gerard
>
> --
> Gerard Braad | http://gbraad.nl
> [ Doing Open Source Matters ]

From rbowen at redhat.com Tue Jun 21 15:20:41 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 21 Jun 2016 11:20:41 -0400
Subject: [rdo-list] RDO bloggers, Jun 21 2016
Message-ID:

Here's what RDO enthusiasts have been blogging about in the last week.

Skydive plugin for devstack by Babu Shanmugam
Devstack is the most commonly used project for OpenStack development.
Wouldn't it be cool to have a supporting software which analyzes the
network infrastructure and helps us troubleshoot and monitor the SDN
solution that Devstack is deploying? ... read more at http://tm3.org/7a

ANNOUNCE: libvirt switch to time based rules for updating version
numbers by Daniel P. Berrangé
Until today, libvirt has used a 3 digit version number for monthly
releases off the git master branch, and a 4 digit version number for
maintenance releases off stable branches. Henceforth all releases will
use 3 digits, and the next release will be 2.0.0, followed by 2.1.0,
2.2.0, etc, with stable releases incrementing the last digit (2.0.1,
2.0.2, etc) instead of appending yet another digit. ... read more at
http://tm3.org/7b

Community Central at Red Hat Summit by Rich Bowen
OpenStack swims in a larger ecosystem of community projects.
At the upcoming Red Hat Summit in San Francisco, RDO will be sharing
the Community Central section of the show floor with various of these
projects. ... read more at http://tm3.org/7c

Custom Overcloud Deploys by Adam Young
I've been using Tripleo Quickstart. I need custom deploys. Start with
modifying the heat templates. I'm doing a mitaka deploy ... read more
at http://tm3.org/7d

Learning about the Overcloud Deploy Process by Adam Young
The process of deploying the overcloud goes through several
technologies. Here's what I've learned about tracing it. ... read more
at http://tm3.org/7e

The difference between auth_uri and auth_url in auth_token by Adam Young
Dramatis Personae: Adam Young, Jamie Lennox: Keystone core. Scene:
#openstack-keystone chat room. ayoung: I still don't understand the
difference between url and uri ... read more at http://tm3.org/7f

Scaling Magnum and Kubernetes: 2 million requests per second by
Ricardo Rocha
Two months ago, we described in this blog post how we deployed
OpenStack Magnum in the CERN cloud. It is available as a pre-production
service and we're steadily moving towards full production mode. ...
read more at http://tm3.org/7g

Keystone Auth Entry Points by Adam Young
OpenStack libraries now use Authentication plugins from the
keystoneauth1 library. One of the plugins has disappeared: Kerberos.
This used to be in the python-keystoneclient-kerberos package, but that
is not shipped with Mitaka. What happened? ... read more at
http://tm3.org/7h

OpenStack Days Budapest, OpenStack Days Prague by Eliska Malikova
It was the 4th OSD in Budapest, but at a brand new place, which was
absolutely brilliant. And by brilliant I mean - nice place to stay,
great location, enough options around, very good sound, well working AC
in rooms for talks and professional catering. I am not sure about
number of attendees, but it was pretty big and crowded - so awesome! ...
read more at http://tm3.org/7i

--
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity

From ayoung at redhat.com Tue Jun 21 16:17:37 2016
From: ayoung at redhat.com (Adam Young)
Date: Tue, 21 Jun 2016 12:17:37 -0400
Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org
In-Reply-To: <13ae5bf4-582a-d4a7-d208-81c50f1b55a0@redhat.com>
References: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com>
 <13ae5bf4-582a-d4a7-d208-81c50f1b55a0@redhat.com>
Message-ID:

On 06/21/2016 10:50 AM, Rich Bowen wrote:
>
> On 06/21/2016 10:00 AM, Adam Young wrote:
>> On 06/20/2016 10:33 AM, Rich Bowen wrote:
>>> 59 unanswered questions:
>> No there are not.
>>
>> I looked at the Keystone question. There are 3 responses, and no
>> feedback from the original poster.
> Ah. Sorry. My script asks the API, and only looks for responses that
> have actually been accepted as an answer. Perhaps I should have it check
> for responses, rather than answers. Thanks for the feedback.
>
> --Rich

Didn't hurt for me to look.

We need to treat that like Stack Overflow... or maybe move it to Stack
Overflow?
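[Editorial note: the distinction in the exchange above, a question with no
accepted answer versus a question with no responses at all, can be sketched
as two small filters over the question data. This is an illustrative sketch
only, not Rich's actual script; the field names (answer_count,
has_accepted_answer) are assumptions modeled loosely on an Askbot-style
API, not the verified ask.openstack.org schema.]

```python
# Two notions of "unanswered", as discussed in the thread above.
# Field names here are assumed (Askbot-style), not the real schema.

def no_accepted_answer(question):
    # What the current script reports: nobody accepted an answer,
    # even if several responses were posted.
    return not question.get("has_accepted_answer", False)

def no_responses(question):
    # The stricter check Rich suggests: no answers posted at all.
    return question.get("answer_count", 0) == 0

# Hypothetical sample data: one question with 3 unaccepted responses
# (like the Keystone case), one with no responses at all.
questions = [
    {"title": "Keystone question", "answer_count": 3,
     "has_accepted_answer": False},
    {"title": "Ceilometer question", "answer_count": 0,
     "has_accepted_answer": False},
]

print([q["title"] for q in questions if no_accepted_answer(q)])
print([q["title"] for q in questions if no_responses(q)])
```

Under the first filter both questions count as unanswered; under the
second, only the one with zero responses does, which matches Adam's
objection that a question with three responses is not really unanswered.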
From hguemar at fedoraproject.org Tue Jun 21 16:36:51 2016
From: hguemar at fedoraproject.org (Haïkel)
Date: Tue, 21 Jun 2016 18:36:51 +0200
Subject: [rdo-list] LinuxCon Berlin, RDO ambassadors
In-Reply-To: <920e1c6f-e031-ba59-6622-83c77ece211b@redhat.com>
References: <920e1c6f-e031-ba59-6622-83c77ece211b@redhat.com>
Message-ID:

2016-06-21 14:30 GMT+02:00 Rich Bowen :
> We have an opportunity to have an expo hall presence at LinuxCon Berlin
> (October 4-6 - http://events.linuxfoundation.org/events/linuxcon-europe )
>
> If you are either in that area, or are likely to attend LinuxCon
> anyways, we're looking for volunteers to spend a shift in the RDO booth
> to answer questions about RDO and OpenStack. The space is usually also
> shared with other projects (CentOS, oVirt, Atomic, Ceph, Gluster, and
> possibly others) so you won't be there alone.
>
> If you are interested/willing, please get in touch with me. Thank you.
>
> --Rich
>

I submitted a few workshops, count me in if I'm going there.

H.

> --
> Rich Bowen - rbowen at redhat.com
> RDO Community Liaison
> http://rdocommunity.org
> @RDOCommunity
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

From rbowen at redhat.com Tue Jun 21 19:50:01 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Tue, 21 Jun 2016 15:50:01 -0400
Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org
In-Reply-To:
References: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com>
 <13ae5bf4-582a-d4a7-d208-81c50f1b55a0@redhat.com>
Message-ID:

On 06/21/2016 12:17 PM, Adam Young wrote:
> On 06/21/2016 10:50 AM, Rich Bowen wrote:
>> On 06/21/2016 10:00 AM, Adam Young wrote:
>>> On 06/20/2016 10:33 AM, Rich Bowen wrote:
>>>> 59 unanswered questions:
>>> No there are not.
>>>
>>> I looked at the Keystone question. There are 3 responses, and no
>>> feedback from the original poster.
>> Ah. Sorry. My script asks the API, and only looks for responses that
>> have actually been accepted as an answer. Perhaps I should have it check
>> for responses, rather than answers. Thanks for the feedback.
>>
>> --Rich
> Didn't hurt for me to look.
>
> We need to treat that like Stack overflow...or maybe move it to Stack
> overflow?

Treat it like Stack Overflow in what regard, exactly?

ask.o.o is maintained by the OpenStack Foundation. As I remember it,
they looked at S.O. as a possible place to host it, and decided that
they preferred to host their own.
--
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity

From ayoung at redhat.com Tue Jun 21 19:54:11 2016
From: ayoung at redhat.com (Adam Young)
Date: Tue, 21 Jun 2016 15:54:11 -0400
Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org
In-Reply-To:
References: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com>
 <13ae5bf4-582a-d4a7-d208-81c50f1b55a0@redhat.com>
Message-ID: <0fea69a2-aaa5-d98d-b2e3-85e2f4f01c33@redhat.com>

On 06/21/2016 03:50 PM, Rich Bowen wrote:
>
> On 06/21/2016 12:17 PM, Adam Young
wrote: >> On 06/21/2016 10:50 AM, Rich Bowen wrote: >>> On 06/21/2016 10:00 AM, Adam Young wrote: >>>> On 06/20/2016 10:33 AM, Rich Bowen wrote: >>>>> 59 unanswered questions: >>>> No there are not. >>>> >>>> I looked at the Keystone question. There are 3 responses, and no >>>> feedback from the original poster. >>> Ah. Sorry. My script asks the API, and only looks for responses that >>> have actually been accepted as an answer. Perhaps I should have it check >>> for responses, rather than answers. Thanks for the feedback. >>> >>> --Rich >> Didn't hurt for me to look. >> >> We need to treat that like Stack overflow...or maybe move it to Stack >> overflow? > Treat it like Stack Overflow in what regard, exactly? > > ask.o.o is maintained by the OpenStack Foundation. As I remember it, > they looked at S.O. as a possible place to host it, and decided that > they preferred to host their own. In the management of the answers (voting up etc) and the ability to say "this question has already been asked" and then close it.
> >> >>>>> Unable to start Ceilometer services >>>>> https://ask.openstack.org/en/question/93600/unable-to-start-ceilometer-services/ >>>>> >>>>> >>>>> Tags: ceilometer, ceilometer-api >>>>> >>>>> Dashboard console - Keyboard and mouse issue in Linux graphical >>>>> environmevt >>>>> https://ask.openstack.org/en/question/93583/dashboard-console-keyboard-and-mouse-issue-in-linux-graphical-environmevt/ >>>>> >>>>> >>>>> Tags: nova, nova-console >>>>> >>>>> Adding hard drive space to RDO installation >>>>> https://ask.openstack.org/en/question/93412/adding-hard-drive-space-to-rdo-installation/ >>>>> >>>>> >>>>> Tags: cinder, openstack, space, add >>>>> >>>>> AWS Ec2 inst Eth port loses IP when attached to linux bridge in >>>>> Openstack >>>>> https://ask.openstack.org/en/question/92271/aws-ec2-inst-eth-port-loses-ip-when-attached-to-linux-bridge-in-openstack/ >>>>> >>>>> >>>>> Tags: openstack, networking, aws >>>>> >>>>> ceilometer: I've installed openstack mitaka. but swift stops working >>>>> when i configured the pipeline and ceilometer filter >>>>> https://ask.openstack.org/en/question/92035/ceilometer-ive-installed-openstack-mitaka-but-swift-stops-working-when-i-configured-the-pipeline-and-ceilometer-filter/ >>>>> >>>>> >>>>> Tags: ceilometer, openstack-swift, mitaka >>>>> >>>>> Fail on installing the controller on Cent OS 7 >>>>> https://ask.openstack.org/en/question/92025/fail-on-installing-the-controller-on-cent-os-7/ >>>>> >>>>> >>>>> Tags: installation, centos7, controller >>>>> >>>>> the error of service entity and API endpoints >>>>> https://ask.openstack.org/en/question/91702/the-error-of-service-entity-and-api-endpoints/ >>>>> >>>>> >>>>> Tags: service, entity, and, api, endpoints >>>>> >>>>> Running delorean fails: Git won't fetch sources >>>>> https://ask.openstack.org/en/question/91600/running-delorean-fails-git-wont-fetch-sources/ >>>>> >>>>> >>>>> Tags: delorean, rdo >>>>> >>>>> RDO Manager install issue - can't resolve trunk-mgt.rdoproject.org 
>>>>> https://ask.openstack.org/en/question/91533/rdo-manager-install-issue-cant-resolve-trunk-mgtrdoprojectorg/ >>>>> >>>>> >>>>> Tags: rdo-manager >>>>> >>>>> Keystone authentication: Failed to contact the endpoint. >>>>> https://ask.openstack.org/en/question/91517/keystone-authentication-failed-to-contact-the-endpoint/ >>>>> >>>>> >>>>> Tags: keystone, authenticate, endpoint, murano >>>>> >>>>> adding computer node. >>>>> https://ask.openstack.org/en/question/91417/adding-computer-node/ >>>>> Tags: rdo, openstack >>>>> >>>>> Liberty RDO: stack resource topology icons are pink >>>>> https://ask.openstack.org/en/question/91347/liberty-rdo-stack-resource-topology-icons-are-pink/ >>>>> >>>>> >>>>> Tags: stack, resource, topology, dashboard >>>>> >>>>> Build of instance aborted: Block Device Mapping is Invalid. >>>>> https://ask.openstack.org/en/question/91205/build-of-instance-aborted-block-device-mapping-is-invalid/ >>>>> >>>>> >>>>> Tags: cinder, lvm, centos7 >>>>> >>>>> No handlers could be found for logger "oslo_config.cfg" while syncing >>>>> the glance database >>>>> https://ask.openstack.org/en/question/91169/no-handlers-could-be-found-for-logger-oslo_configcfg-while-syncing-the-glance-database/ >>>>> >>>>> >>>>> Tags: liberty, glance, install-openstack >>>>> >>>>> how to use chef auto manage openstack in RDO? >>>>> https://ask.openstack.org/en/question/90992/how-to-use-chef-auto-manage-openstack-in-rdo/ >>>>> >>>>> >>>>> Tags: chef, rdo >>>>> >>>>> Separate Cinder storage traffic from management >>>>> https://ask.openstack.org/en/question/90405/separate-cinder-storage-traffic-from-management/ >>>>> >>>>> >>>>> Tags: cinder, separate, nic, iscsi >>>>> >>>>> Openstack installation fails using packstack, failure is in >>>>> installation >>>>> of openstack-nova-compute. 
Error: Dependency Package[nova-compute] has >>>>> failures >>>>> https://ask.openstack.org/en/question/88993/openstack-installation-fails-using-packstack-failure-is-in-installation-of-openstack-nova-compute-error-dependency-packagenova-compute-has-failures/ >>>>> >>>>> Tags: novacompute, rdo, packstack, dependency, failure From rbowen at redhat.com Tue Jun 21 19:56:54 2016 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 21 Jun 2016 15:56:54 -0400 Subject: [rdo-list] Unanswered 'RDO' questions on ask.openstack.org In-Reply-To: <0fea69a2-aaa5-d98d-b2e3-85e2f4f01c33@redhat.com> References: <1aabd8c6-bc85-e363-5ee5-2ae8d2fd99b2@redhat.com> <13ae5bf4-582a-d4a7-d208-81c50f1b55a0@redhat.com>
<0fea69a2-aaa5-d98d-b2e3-85e2f4f01c33@redhat.com> Message-ID: <05c68b0a-c7c7-58dc-fdd4-6dea968f2d8a@redhat.com> On 06/21/2016 03:54 PM, Adam Young wrote: > On 06/21/2016 03:50 PM, Rich Bowen wrote: >> >> On 06/21/2016 12:17 PM, Adam Young wrote: >>> On 06/21/2016 10:50 AM, Rich Bowen wrote: >>>> On 06/21/2016 10:00 AM, Adam Young wrote: >>>>> On 06/20/2016 10:33 AM, Rich Bowen wrote: >>>>>> 59 unanswered questions: >>>>> No there are not. >>>>> >>>>> I looked at the Keystone question. There are 3 responses, and no >>>>> feedback from the original poster. >>>> Ah. Sorry. My script asks the API, and only looks for responses that >>>> have actually been accepted as an answer. Perhaps I should have it >>>> check >>>> for responses, rather than answers. Thanks for the feedback. >>>> >>>> --Rich >>> Didn't hurt for me to look. >>> >>> We need to treat that like Stack overflow...or maybe move it to Stack >>> overflow? >> Treat it like Stack Overflow in what regard, exactly? >> >> ask.o.o is maintained by the OpenStack Foundation. As I remember it, >> they looked at S.O. as a possible place to host it, and decided that >> they preferred to host their own. > > > In the management of the answers (voting up etc) and the ability to say > "this question has already been asked" and then close it. Oh, ok. I have been granted some powers on the site, and can close questions for a variety of reasons. For example, I've closed about 100 EOL questions today. And I close duplicate questions when I find them. The voting system doesn't seem to be used by enough people for it to mean anything. I occasionally see one or two votes, but seldom.
-- Rich Bowen - rbowen at redhat.com RDO Community Liaison http://rdocommunity.org @RDOCommunity From me at gbraad.nl Wed Jun 22 00:51:51 2016 From: me at gbraad.nl (Gerard Braad) Date: Wed, 22 Jun 2016 08:51:51 +0800 Subject: [rdo-list] [tripleo] [oooq] Deployment to baremetal fails; "No valid host was found" In-Reply-To: <466457170.582984.1466521019177.JavaMail.zimbra@redhat.com> References: <5767FF0F.7000307@redhat.com> <492676804.355507.1466448942784.JavaMail.zimbra@redhat.com> <466457170.582984.1466521019177.JavaMail.zimbra@redhat.com> Message-ID: Hi, Had several successful and unsuccessful (due to unreachable undercloud) deployments. From both I learned a lot, especially related to the Ironic association and provisioning states. On Tue, Jun 21, 2016 at 10:56 PM, Ronelle Landy wrote: >> From: "Gerard Braad" >> Currently deploying from a fresh installation of Mitaka quickstart... >> I have not changed my setup (using the same deployment scripts and >> parameters) > Ok - so your previous issue of 'no hosts' seems to be resolved with latest Mitaka. It seems. There is however a consistent issue remaining. I need to do a systemctl restart openstack-ironic-conductor after each reboot of the undercloud. The service refuses to start correctly: [stack at undercloud ~]$ openstack baremetal introspection bulk start No valid host was found. Reason: No conductor service registered which supports driver pxe_ipmitool. (HTTP 400) [stack at undercloud ~]$ sudo systemctl restart openstack-ironic-conductor [stack at undercloud ~]$ openstack baremetal introspection bulk start > I have seen a similar loss of connection working on Newton during deploy/introspection: > fatal: [undercloud]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true} > It is possibly an environment configuration issue - I am looking into it more. I have seen this behaviour with liberty and mitaka, and more consistent during deployment. 
When I did virtual deployments it wasn't as obvious, but now during the physical deployments it starts biting. Any suggestions what I should look at? regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From tom at buskey.name Wed Jun 22 15:18:08 2016 From: tom at buskey.name (Tom Buskey) Date: Wed, 22 Jun 2016 11:18:08 -0400 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: <1805327027.177605.1466431907997.JavaMail.zimbra@redhat.com> References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> <227969699.215194.1466423093998.JavaMail.zimbra@redhat.com> <1805327027.177605.1466431907997.JavaMail.zimbra@redhat.com> Message-ID: Having a tiny, multi-node, non-HA cloud has been extremely useful for developing anything that needs an Openstack cloud to talk to. We don't care if the cloud dies. Our VM images and recipes are elsewhere. We rebuild the clouds as needed. We migrate to another working cloud while we rebuild if needed. We need to test > 1 compute node but not 5 compute nodes. Ex: migration can't be done on a single node! packstack is ideal. A 2 node cloud where both nodes need to do compute lets us run 20-30 VMs to test. We also need multiple clouds. We have "production" clouds and need a minimum of 3 nodes == 50% more costs and the controller is mostly idle. Going from 1 to 5 (7?) for a "real production" cloud is a big leap. We don't need HA and the load on the controller is light enough that it can also handle compute. 
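For illustration, the small non-HA layout described above (a controller that also runs compute, plus one extra compute node) corresponds to a packstack answer-file fragment along these lines. The keys are the standard packstack ones discussed later in this thread; the addresses are invented for the example:

```shell
# Sketch of a two-node packstack answer-file fragment, not a tested
# deployment: host 192.0.2.10 acts as controller + network + compute,
# and 192.0.2.11 is a second compute node.
CONFIG_CONTROLLER_HOST=192.0.2.10
CONFIG_NETWORK_HOSTS=192.0.2.10
CONFIG_COMPUTE_HOSTS=192.0.2.10,192.0.2.11
```

With two entries in CONFIG_COMPUTE_HOSTS, migration between nodes can be exercised, which a single all-in-one host cannot do.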
On Mon, Jun 20, 2016 at 10:11 AM, Ivan Chavero wrote: > > > ----- Original Message ----- > > From: "Boris Derzhavets" > > To: "Javier Pena" > > Cc: "alan pevec" , "rdo-list" < > rdo-list at redhat.com> > > Sent: Monday, June 20, 2016 8:35:52 AM > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > > > > > > > > > > > > > > From: Javier Pena > > Sent: Monday, June 20, 2016 7:44 AM > > To: Boris Derzhavets > > Cc: rdo-list; alan pevec > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > ----- Original Message ----- > > > > > From: rdo-list-bounces at redhat.com on > behalf > > > of > > > Javier Pena > > > Sent: Friday, June 17, 2016 10:45 AM > > > To: rdo-list > > > Cc: alan pevec > > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > > > ----- Original Message ----- > > > > > We could take an easier way and assume we only have 3 roles, as in > the > > > > > current refactored code: controller, network, compute. The logic > would > > > > > then be: > > > > > - By default we install everything, so all in one > > > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > > > > > > > Of course, the last two options would assume a first server is > > > > > installed > > > > > as > > > > > controller. > > > > > > > > > > This would allow us to reuse the same answer file on all runs (one > per > > > > > host > > > > > as you proposed), eliminate the ssh code as we are always running > > > > > locally, > > > > > and make some assumptions in the python code, like expecting OPM > to be > > > > > deployed and such. A contributed ansible wrapper to automate the > runs > > > > > would be straightforward to create. > > > > > > > > > > What do you think? Would it be worth the effort? > > > > > > > > +2 I like that proposal a lot! 
An ansible wrapper is then just an > > > > example playbook in docs but could be done w/o ansible as well, > > > > manually or using some other remote execution tooling of user's > > > > choice. > > > > > > > Now that the phase 1 refactor is under review and passing CI, I think > it's > > > time to come to a conclusion on this. > > > This option looks like the best compromise between keeping it simple > and > > > dropping the least possible amount of features. So unless someone has a > > > better idea, I'll work on that as soon as the current review is merged. > > > > > > Would it be possible :- > > > > > > - By default we install everything, so all in one > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > CONFIG_STORAGE_HOSTS , we apply the storage manifest > > > > > > Just one more role. May we have 4 roles ? > > > > This is a tricky one. There used to be support for separate > > CONFIG_STORAGE_HOSTS, but I think it has been removed (or at least not > > tested for quite a long time). > > This option is still there, is set as "unsupported" i think it might be > a good idea to keep it. > > what do you guys think? > > > > However, this feature currently works for RDO Mitaka ( as well it woks > for > > Liberty) > > It's even possible to add Storage Node via packstack , taking care of > glance > > and swift proxy > > keystone endpoints manually . > > For small prod deployments like several (5-10) Haswell Xeon boxes. ( no > HA > > requirements from > > customer's side ). Ability to split Storage specifically Swift (AIO) > > instances or Cinder iSCSILVM > > back ends hosting Node from Controller is extremely critical feature. > > What I am writing is based on several projects committed in South > America's > > countries. 
> > No complaints from site support stuff to myself for configurations > deployed > > via Packstack. > > Dropping this feature ( unsupported , but stable working ) will for sure > make > > Packstack > > almost useless toy . > > In situation when I am able only play with TripleO QuickStart due to > Upstream > > docs > > ( Mitaka trunk instructions set) for instack-virt-setup don't allow to > commit > > `openstack undercloud install` makes Howto :- > > > > https://remote-lab.net/rdo-manager-ha-openstack-deployment > > > > non reproducible. I have nothing against TripleO turn, but absence of > Red Hat > > high quality manuals for TripleO bare metal / TripleO Instak-virt-setup > > will affect RDO Community in wide spread way. I mean first all countries > > like Chile, Brazil, China and etc. > > > > Thank you. > > Boris. > > > > This would need to be a follow-up review, if it is finally decided to do > so. > > > > Regards, > > Javier > > > > > Thanks > > > Boris. > > > > > Regards, > > > Javier > > > > > > Alan > > > > > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > > rdo-list Info Page - Red Hat > > www.redhat.com > > The rdo-list mailing list provides a forum for discussions about > installing, > > running, and using OpenStack on Red Hat based distributions. To see the > > collection of ... > > > > > > > rdo-list Info Page - Red Hat > > > www.redhat.com > > > The rdo-list mailing list provides a forum for discussions about > > > installing, > > > running, and using OpenStack on Red Hat based distributions. To see the > > > collection of ... 
> > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > > _______________________________________________ > > > rdo-list mailing list > > > rdo-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > > To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichavero at redhat.com Wed Jun 22 16:14:28 2016 From: ichavero at redhat.com (Ivan Chavero) Date: Wed, 22 Jun 2016 12:14:28 -0400 (EDT) Subject: [rdo-list] [Minute] RDO meeting (2016-06-22) Minutes In-Reply-To: <1737759119.659331.1466611871293.JavaMail.zimbra@redhat.com> Message-ID: <1700980079.659952.1466612068661.JavaMail.zimbra@redhat.com> ============================== #rdo: RDO meeting (2016-06-22) ============================== Meeting started by imcsk8 at 15:00:45 UTC. The full logs are available at https://meetbot.fedoraproject.org/rdo/2016-06-22/rdo_meeting_(2016-06-22).2016-06-22-15.00.log.html . 
Meeting summary --------------- * LINK: https://etherpad.openstack.org/p/RDO-Meeting (chandankumar, 15:01:09) * rollcall (apevec, 15:04:01) * graylist review.rdoproject.org (apevec, 15:04:17) * ACTION: apevec to followup remaining graylisting of review.rdoproject.org with fbo jschlueter misc (apevec, 15:10:18) * DLRN instance migration to ci.centos infra (apevec, 15:10:44) * ACTION: dmsimard to re-sync the current-passed-ci symlinks (jpena, 15:16:16) * ACTION: jpena to switch DNS entries for trunk.rdoproject.org on Thu Jun 23 (jpena, 15:16:29) * MM3 (mailman3) installation (apevec, 15:17:02) * ACTION: number80 coordinate requirements for m-l migration in trello (number80, 15:21:14) * New release for openstack-utils (apevec, 15:27:01) * LINK: https://github.com/redhat-openstack/openstack-utils/commit/11c3e85609f168a64410354afce1c143fc8661a9 (number80, 15:29:46) * LINK: https://github.com/redhat-openstack/openstack-utils/issues/13 should be fixed? (apevec, 15:30:01) * ACTION: apevec and number80 do triage of open issue and release openstack-utils 2016.1 (apevec, 15:31:09) * Proposal to manage pinned packages (apevec, 15:31:35) * ACTION: jpena to start thread on rdo-list about pinned packages (jpena, 15:43:49) * Add openstack-macros in CBS cloud SIG buildroot (apevec, 15:44:21) * ACTION: number80 migrate rdo-rpm-macros to openstack-macros (number80, 15:44:57) * ACTION: apevec to review dlrn rpm-packaging support https://review.rdoproject.org/r/1346 (apevec, 15:47:50) * How to raise an alert when RDO Trunk repos are broken (apevec, 15:48:03) * ACTION: dmsimard to suggest lightweight sensu probe for basic rdo repo consistency check (apevec, 15:51:31) * Test Day (apevec, 15:52:12) * ACTION: rbowen to promote Newton Milestone 2 test day, July 21/22 (rbowen, 15:54:53) * Chair for next meeting (apevec, 15:55:04) * imcsk8 is chair June 29 (apevec, 15:56:17) * chandankumar is chair July 6 (apevec, 15:56:34) * Open Floor (apevec, 15:56:37) Meeting ended at 16:00:32 UTC. 
Action Items ------------ * apevec to followup remaining graylisting of review.rdoproject.org with fbo jschlueter misc * dmsimard to re-sync the current-passed-ci symlinks * jpena to switch DNS entries for trunk.rdoproject.org on Thu Jun 23 * number80 coordinate requirements for m-l migration in trello * apevec and number80 do triage of open issue and release openstack-utils 2016.1 * jpena to start thread on rdo-list about pinned packages * number80 migrate rdo-rpm-macros to openstack-macros * apevec to review dlrn rpm-packaging support https://review.rdoproject.org/r/1346 * dmsimard to suggest lightweight sensu probe for basic rdo repo consistency check * rbowen to promote Newton Milestone 2 test day, July 21/22 Action Items, by person ----------------------- * apevec * apevec to followup remaining graylisting of review.rdoproject.org with fbo jschlueter misc * apevec and number80 do triage of open issue and release openstack-utils 2016.1 * apevec to review dlrn rpm-packaging support https://review.rdoproject.org/r/1346 * dmsimard * dmsimard to re-sync the current-passed-ci symlinks * dmsimard to suggest lightweight sensu probe for basic rdo repo consistency check * jpena * jpena to switch DNS entries for trunk.rdoproject.org on Thu Jun 23 * jpena to start thread on rdo-list about pinned packages * misc * apevec to followup remaining graylisting of review.rdoproject.org with fbo jschlueter misc * number80 * number80 coordinate requirements for m-l migration in trello * apevec and number80 do triage of open issue and release openstack-utils 2016.1 * number80 migrate rdo-rpm-macros to openstack-macros * rbowen * rbowen to promote Newton Milestone 2 test day, July 21/22 * **UNASSIGNED** * (none) People Present (lines said) --------------------------- * apevec (137) * number80 (31) * dmsimard (28) * rbowen (24) * jpena (24) * trown (21) * imcsk8 (20) * Duck (20) * misc (17) * zodbot (9) * chandankumar (4) * EmilienM (3) * amoralej (3) * mburned (1) Generated by 
`MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot From bderzhavets at hotmail.com Wed Jun 22 16:51:01 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 22 Jun 2016 16:51:01 +0000 Subject: [rdo-list] Packstack refactor and future ideas In-Reply-To: References: <2136939962.13596997.1465403394549.JavaMail.zimbra@redhat.com> <314502750.13808285.1465460613103.JavaMail.zimbra@redhat.com> <1021183242.16736095.1466174715123.JavaMail.zimbra@redhat.com> <227969699.215194.1466423093998.JavaMail.zimbra@redhat.com> <1805327027.177605.1466431907997.JavaMail.zimbra@redhat.com>, Message-ID: ________________________________ From: tbuskey at gmail.com on behalf of Tom Buskey Sent: Wednesday, June 22, 2016 11:18 AM To: Ivan Chavero Cc: Boris Derzhavets; alan pevec; rdo-list; Javier Pena Subject: Re: [rdo-list] Packstack refactor and future ideas Having a tiny, multi-node, non-HA cloud has been extremely useful for developing anything that needs an Openstack cloud to talk to. We don't care if the cloud dies. Our VM images and recipes are elsewhere. We rebuild the clouds as needed. We migrate to another working cloud while we rebuild if needed. We need to test > 1 compute node but not 5 compute nodes. Ex: migration can't be done on a single node! packstack is ideal. A 2 node cloud where both nodes need to do compute lets us run 20-30 VMs to test. We also need multiple clouds. In current thread there were 2 major arguments why packstack has to be dropped ( or at least less functional in regards of multinode support then currently in RDO Mitaka ) 1. Multinode functionality cannot pass CI ( creating this tests is out of scope , hence it should be dropped) If it is not tested , then it is not working BY DEFINITION ( I am quoting Alan word in word ) Why Lars Kellogg-Stedman provided "RDO Havana Hangout" via YouTube and as slide-show, he was explaining packstack functionality which NEVER passed CI ? 
Something happened in Mitaka/Newton times - TripleO stabilized ( first of all on bare metal ) 2. Packstack is compromising RDO providing to customers wrong impression about RDO Core features in reality and in meantime . Hence, it is breaking correct focus on Triple0 Bare Betal/TripleO QuickStart ( now virtual but supposed to handle bare metal ASAP) . Regarding statement (2) I would better hold my opinion with myself. Boris. We have "production" clouds and need a minimum of 3 nodes == 50% more costs and the controller is mostly idle. Going from 1 to 5 (7?) for a "real production" cloud is a big leap. We don't need HA and the load on the controller is light enough that it can also handle compute. On Mon, Jun 20, 2016 at 10:11 AM, Ivan Chavero > wrote: ----- Original Message ----- > From: "Boris Derzhavets" > > To: "Javier Pena" > > Cc: "alan pevec" >, "rdo-list" > > Sent: Monday, June 20, 2016 8:35:52 AM > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > > > > > > From: Javier Pena > > Sent: Monday, June 20, 2016 7:44 AM > To: Boris Derzhavets > Cc: rdo-list; alan pevec > Subject: Re: [rdo-list] Packstack refactor and future ideas > ----- Original Message ----- > > > From: rdo-list-bounces at redhat.com > on behalf > > of > > Javier Pena > > > Sent: Friday, June 17, 2016 10:45 AM > > To: rdo-list > > Cc: alan pevec > > Subject: Re: [rdo-list] Packstack refactor and future ideas > > > ----- Original Message ----- > > > > We could take an easier way and assume we only have 3 roles, as in the > > > > current refactored code: controller, network, compute. 
The logic would > > > > then be: > > > > - By default we install everything, so all in one > > > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > > > > > > > Of course, the last two options would assume a first server is > > > > installed > > > > as > > > > controller. > > > > > > > > This would allow us to reuse the same answer file on all runs (one per > > > > host > > > > as you proposed), eliminate the ssh code as we are always running > > > > locally, > > > > and make some assumptions in the python code, like expecting OPM to be > > > > deployed and such. A contributed ansible wrapper to automate the runs > > > > would be straightforward to create. > > > > > > > > What do you think? Would it be worth the effort? > > > > > > +2 I like that proposal a lot! An ansible wrapper is then just an > > > example playbook in docs but could be done w/o ansible as well, > > > manually or using some other remote execution tooling of user's > > > choice. > > > > > Now that the phase 1 refactor is under review and passing CI, I think it's > > time to come to a conclusion on this. > > This option looks like the best compromise between keeping it simple and > > dropping the least possible amount of features. So unless someone has a > > better idea, I'll work on that as soon as the current review is merged. > > > > Would it be possible :- > > > > - By default we install everything, so all in one > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > CONFIG_NETWORK_HOSTS, we apply the network manifest > > - Same as above if our host is part of CONFIG_COMPUTE_HOSTS > > - If our host is not CONFIG_CONTROLLER_HOST but is part of > > CONFIG_STORAGE_HOSTS , we apply the storage manifest > > > > Just one more role. May we have 4 roles ? > > This is a tricky one. 
There used to be support for separate > CONFIG_STORAGE_HOSTS, but I think it has been removed (or at least not > tested for quite a long time). This option is still there, is set as "unsupported" i think it might be a good idea to keep it. what do you guys think? > However, this feature currently works for RDO Mitaka ( as well it woks for > Liberty) > It's even possible to add Storage Node via packstack , taking care of glance > and swift proxy > keystone endpoints manually . > For small prod deployments like several (5-10) Haswell Xeon boxes. ( no HA > requirements from > customer's side ). Ability to split Storage specifically Swift (AIO) > instances or Cinder iSCSILVM > back ends hosting Node from Controller is extremely critical feature. > What I am writing is based on several projects committed in South America's > countries. > No complaints from site support stuff to myself for configurations deployed > via Packstack. > Dropping this feature ( unsupported , but stable working ) will for sure make > Packstack > almost useless toy . > In situation when I am able only play with TripleO QuickStart due to Upstream > docs > ( Mitaka trunk instructions set) for instack-virt-setup don't allow to commit > `openstack undercloud install` makes Howto :- > > https://remote-lab.net/rdo-manager-ha-openstack-deployment > > non reproducible. I have nothing against TripleO turn, but absence of Red Hat > high quality manuals for TripleO bare metal / TripleO Instak-virt-setup > will affect RDO Community in wide spread way. I mean first all countries > like Chile, Brazil, China and etc. > > Thank you. > Boris. > > This would need to be a follow-up review, if it is finally decided to do so. > > Regards, > Javier > > > Thanks > > Boris. 
> > > Regards, > > Javier > > > > Alan > > > > > > _______________________________________________ > > rdo-list mailing list > > rdo-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rdo-list > > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Thu Jun 23 07:14:40 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 23 Jun 2016 03:14:40 -0400 (EDT) Subject: [rdo-list] [DLRN] Options to pin packages when needed In-Reply-To: <184620250.1542999.1466665712826.JavaMail.zimbra@redhat.com> Message-ID: <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> Hi all, RDO Trunk repos are meant to package the latest upstream commits, but in some rare cases we may need to pin a specific package to a non-latest commit to temporarily fix a breakage.
This week, we had one of those cases with keystonemiddleware, but the procedure used to do it caused some disruption because some commits were rebuilt, breaking the current-passed-ci and current-tripleo repos for centos-master. Looking for alternatives, my first idea was the following: - Temporarily take that package out of rdoinfo using a specific tag (e.g. tag: to-fix) - And add a working package to an "overrides" repo, included as part of delorean-deps.repo This avoids any DLRN db hacks, but stops processing updates for the package until the breakage is fixed. What do you think? Any alternative ideas? Thanks, Javier From javier.pena at redhat.com Thu Jun 23 08:18:36 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 23 Jun 2016 04:18:36 -0400 (EDT) Subject: [rdo-list] [DLRN] Instance switch to new infrastructure In-Reply-To: <362708336.1556578.1466669479511.JavaMail.zimbra@redhat.com> Message-ID: <715977688.1558395.1466669916298.JavaMail.zimbra@redhat.com> Hi RDO users, We have finally switched the DLRN instance to the new server in the Centos CI infrastructure. All repos should still be accessible via their current URLs, but of course there might be some hiccups during the transition, so please bear with us while we make sure everything is working as expected. 
This change also means that the consistent and CI-passed repos are now published using the CentOS CDN via buildlogs.centos.org, as shown by the following table:

Old repo                           New URL
centos7-master/consistent          http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master
centos7-master/current-passed-ci   http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested
centos7-master/current-tripleo     http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo
centos7-mitaka/consistent          http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-mitaka
centos7-mitaka/current-passed-ci   http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-mitaka-tested
centos7-liberty/consistent         http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-liberty
centos7-liberty/current-passed-ci  http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-liberty-tested

We have set up compatibility redirections from the old URLs to the new ones, so you don't have to change the URLs immediately. If you find any issues, please reach out to us at #rdo. Regards, Javier From hguemar at fedoraproject.org Thu Jun 23 10:38:18 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 23 Jun 2016 12:38:18 +0200 Subject: [rdo-list] [DLRN] Options to pin packages when needed In-Reply-To: <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> References: <184620250.1542999.1466665712826.JavaMail.zimbra@redhat.com> <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> Message-ID: 2016-06-23 9:14 GMT+02:00 Javier Pena : > Hi all, > > RDO Trunk repos are meant to package the latest upstream commits, but in some rare cases we may need to pin a specific package to a non-latest commit to temporarily fix a breakage.
> > This week, we had one of those cases with keystonemiddleware, but the procedure used to do it caused some disruption because some commits were rebuilt, breaking the current-passed-ci and current-tripleo repos for centos-master. > > Looking for alternatives, my first idea was the following: > > - Temporarily take that package out of rdoinfo using a specific tag (e.g. tag: to-fix) If possible, I'd like DLRN to keep building these to keep track of their statuses and allow running CI jobs so that they don't get left out. It'd mean adding logic so that DLRN skips regenerating the repo snapshot and puts them in a separate staging repo. Considering the low number of pinned packages, I'm +0 about it. > - And add a working package to an "overrides" repo, included as part of delorean-deps.repo +1 > > This avoids any DLRN db hacks, but stops processing updates for the package until the breakage is fixed. > > What do you think? Any alternative ideas? > > Thanks, > Javier > This is likely the best path. H. From hguemar at fedoraproject.org Thu Jun 23 10:42:06 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 23 Jun 2016 12:42:06 +0200 Subject: [rdo-list] Fwd: [openstack-dev] [all][oslo] pbr potential breaking change coming In-Reply-To: <1466513668-sup-9687@lrrr.local> References: <1466513668-sup-9687@lrrr.local> Message-ID: We might experience documentation build failures in RDO master in the near future. ---------- Forwarded message ---------- From: Doug Hellmann Date: 2016-06-21 15:01 GMT+02:00 Subject: [openstack-dev] [all][oslo] pbr potential breaking change coming To: openstack-dev A while back pbr had a feature that let projects pass "warnerror" through to Sphinx during documentation builds, causing any warnings in that build to be treated as an error and fail the build. This lets us avoid things like links to places that don't exist in the docs, bad but renderable rst, typos in directive or role names, etc.
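For anyone who wants to check their projects before that pbr release lands: the flag Doug describes lives in the [build_sphinx] section of setup.cfg in pbr-using projects. The snippet below is a generic, illustrative example (the doc paths are placeholders, not from any specific project), showing the short-term fix of flipping warnerrors off:

```shell
# Illustrative setup.cfg fragment (paths are placeholders); warnerrors is
# the pbr flag the mail is about.
cat > setup.cfg.example <<'EOF'
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
warnerrors = True
EOF

# Short-term fix from the mail: turn the flag off until the warnings are fixed.
sed -i 's/^warnerrors = True/warnerrors = False/' setup.cfg.example
grep '^warnerrors' setup.cfg.example   # -> warnerrors = False
```

Once the existing Sphinx warnings are cleaned up, the flag can be flipped back to True so new warnings fail the build again.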
Somewhat more recently, but still a while ago, that feature "broke" with a Sphinx release that was not API compatible. Sachi King has fixed this issue within pbr, and so the next release of pbr will fix the broken behavior and start correctly passing warnerror again. That may result in doc builds breaking where they didn't before. The short-term solution is to turn off warnerrors (look in your setup.cfg), then fix the issues and turn it back on. Or you could preemptively fix any existing warnings in your doc builds before the release, but it's simple enough to turn off the feature if there isn't time. Josh, Sachi, & other Oslo folks, I think we should hold off on releasing until next week to give folks time. Is that OK? Doug PS - Thanks, Sachi, I know that bug wasn't a ton of fun to fix! __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From javier.pena at redhat.com Thu Jun 23 10:48:51 2016 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 23 Jun 2016 06:48:51 -0400 (EDT) Subject: [rdo-list] [DLRN] Options to pin packages when needed In-Reply-To: References: <184620250.1542999.1466665712826.JavaMail.zimbra@redhat.com> <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> Message-ID: <1739073024.1593744.1466678931379.JavaMail.zimbra@redhat.com> ----- Original Message ----- > 2016-06-23 9:14 GMT+02:00 Javier Pena : > > Hi all, > > > > RDO Trunk repos are meant to package the latest upstream commits, but in > > some rare cases we may need to pin a specific package to a non-latest > > commit to temporarily fix a breakage.
> > > > This week, we had one of those cases with keystonemiddleware, but the > > procedure used to do it caused some disruption because some commits were > > rebuilt, breaking the current-passed-ci and current-tripleo repos for > > centos-master. > > > > Looking for alternatives, my first idea was the following: > > > > - Temporarily take that package out of rdoinfo using a specific tag (e.g. > > tag: to-fix) > > if possible, I'd like DLRN to keep building these to keep track of > their statuses and allow running CI jobs so that they don't get left > out. > It'd mean adding logic so that DLRN skip regenerating repo snapshot > and put them in a separate staging repo. > > Considering the low number of pinned packages, I'm +0 about it. Would it work if we just added the package to the overrides repo, making sure it has a higher version number (or epoch)? The only issue I can think of is upgrades. > > > - And add a working package to an "overrides" repo, included as part of > > delorean-deps.repo > > +1 > > > > > This avoids any DLRN db hacks, but stops processing updates for the package > > until the breakage is fixed. > > > > What do you think? Any alternative ideas? > > > > Thanks, > > Javier > > > > This is likely the best path. > > H. 
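A side note on the epoch idea raised above: with plain version comparison (priorities aside), an override only wins if its EVR sorts higher, and an Epoch trumps any version. The helper below is only a rough approximation of librpm's rpmvercmp, written to illustrate why an epoch bump guarantees the override wins; it is not how yum actually compares packages.

```shell
#!/bin/sh
# Rough approximation of rpm EVR ordering (illustration only; real yum uses
# librpm's rpmvercmp, and sort -V only approximates the version part).
newer() {  # newer EVR1 EVR2 -> prints the one yum would roughly prefer
    e1=${1%%:*}; [ "$e1" = "$1" ] && e1=0   # a missing epoch counts as 0
    e2=${2%%:*}; [ "$e2" = "$2" ] && e2=0
    if [ "$e1" -gt "$e2" ]; then echo "$1"
    elif [ "$e2" -gt "$e1" ]; then echo "$2"
    else printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1
    fi
}

newer 1:2.3.1-1 2.4.0-1   # -> 1:2.3.1-1 (the epoch wins despite the older version)
newer 2.3.1-1 2.4.0-1     # -> 2.4.0-1 (no epoch, so the newer version wins)
```

This is why bumping the Epoch on an override package is the safe way to make it shadow whatever DLRN builds next, at the cost of carrying that epoch forever.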
>> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From amoralej at redhat.com Thu Jun 23 11:06:23 2016 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 23 Jun 2016 13:06:23 +0200 Subject: [rdo-list] [DLRN] Options to pin packages when needed In-Reply-To: <1739073024.1593744.1466678931379.JavaMail.zimbra@redhat.com> References: <184620250.1542999.1466665712826.JavaMail.zimbra@redhat.com> <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> <1739073024.1593744.1466678931379.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Jun 23, 2016 at 12:48 PM, Javier Pena wrote: > > > ----- Original Message ----- >> 2016-06-23 9:14 GMT+02:00 Javier Pena : >> > Hi all, >> > >> > RDO Trunk repos are meant to package the latest upstream commits, but in >> > some rare cases we may need to pin a specific package to a non-latest >> > commit to temporarily fix a breakage. >> > >> > This week, we had one of those cases with keystonemiddleware, but the >> > procedure used to do it caused some disruption because some commits were >> > rebuilt, breaking the current-passed-ci and current-tripleo repos for >> > centos-master. >> > >> > Looking for alternatives, my first idea was the following: >> > >> > - Temporarily take that package out of rdoinfo using a specific tag (e.g. >> > tag: to-fix) >> I like this idea, just note that this must be a per-release tag; a package pin should be for a specific release. >> if possible, I'd like DLRN to keep building these to keep track of >> their statuses and allow running CI jobs so that they don't get left >> out. >> It'd mean adding logic so that DLRN skip regenerating repo snapshot >> and put them in a separate staging repo. >> >> Considering the low number of pinned packages, I'm +0 about it.
> > Would it work if we just added the package to the overrides repo, making sure it has a higher version number (or epoch)? The only issue I can think of is upgrades. > >> >> > - And add a working package to an "overrides" repo, included as part of >> > delorean-deps.repo >> >> +1 >> >> > >> > This avoids any DLRN db hacks, but stops processing updates for the package >> > until the breakage is fixed. >> > Keeping these packages building would be ideal, but I guess we should expect package pinning to be exceptional, and we should not overcomplicate things unless reality proves we need it. >> > What do you think? Any alternative ideas? >> > >> > Thanks, >> > Javier >> > >> >> This is likely the best path. >> >> H. >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Thu Jun 23 12:35:09 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Thu, 23 Jun 2016 14:35:09 +0200 Subject: [rdo-list] [DLRN] Options to pin packages when needed In-Reply-To: <1739073024.1593744.1466678931379.JavaMail.zimbra@redhat.com> References: <184620250.1542999.1466665712826.JavaMail.zimbra@redhat.com> <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> <1739073024.1593744.1466678931379.JavaMail.zimbra@redhat.com> Message-ID: 2016-06-23 12:48 GMT+02:00 Javier Pena : > > > ----- Original Message ----- >> 2016-06-23 9:14 GMT+02:00 Javier Pena : >> > Hi all, >> > >> > RDO Trunk repos are meant to package the latest upstream commits, but in >> > some rare cases we may need to pin a specific package to a non-latest >> > commit to temporarily fix a breakage.
>> > >> > This week, we had one of those cases with keystonemiddleware, but the >> > procedure used to do it caused some disruption because some commits were >> > rebuilt, breaking the current-passed-ci and current-tripleo repos for >> > centos-master. >> > >> > Looking for alternatives, my first idea was the following: >> > >> > - Temporarily take that package out of rdoinfo using a specific tag (e.g. >> > tag: to-fix) >> >> if possible, I'd like DLRN to keep building these to keep track of >> their statuses and allow running CI jobs so that they don't get left >> out. >> It'd mean adding logic so that DLRN skip regenerating repo snapshot >> and put them in a separate staging repo. >> >> Considering the low number of pinned packages, I'm +0 about it. > > Would it work if we just added the package to the overrides repo, making sure it has a higher version number (or epoch)? The only issue I can think of is upgrades. > It would work for me. Upgrades is a non-problem as we don't support upgrades between DLRN snapshots. >> >> > - And add a working package to an "overrides" repo, included as part of >> > delorean-deps.repo >> >> +1 >> >> > >> > This avoids any DLRN db hacks, but stops processing updates for the package >> > until the breakage is fixed. >> > >> > What do you think? Any alternative ideas? >> > >> > Thanks, >> > Javier >> > >> >> This is likely the best path. >> >> H. 
>> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> From trown at redhat.com Thu Jun 23 14:39:54 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 23 Jun 2016 10:39:54 -0400 Subject: [rdo-list] [DLRN] Options to pin packages when needed In-Reply-To: <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> References: <514613618.1543611.1466666080349.JavaMail.zimbra@redhat.com> Message-ID: <576BF4BA.5040606@redhat.com> On 06/23/2016 03:14 AM, Javier Pena wrote: > Hi all, > > RDO Trunk repos are meant to package the latest upstream commits, but in some rare cases we may need to pin a specific package to a non-latest commit to temporarily fix a breakage. > > This week, we had one of those cases with keystonemiddleware, but the procedure used to do it caused some disruption because some commits were rebuilt, breaking the current-passed-ci and current-tripleo repos for centos-master. > > Looking for alternatives, my first idea was the following: > > - Temporarily take that package out of rdoinfo using a specific tag (e.g. tag: to-fix) > - And add a working package to an "overrides" repo, included as part of delorean-deps.repo One potential issue here is that most (all?) consumers of dlrn use yum-plugin-priorities to make sure all packages in main dlrn repo take precedence over dlrn deps repo regardless of NVR. Will removing a package from rdoinfo make it disappear from future repos? Also, how do we signal to the various external consumers of dlrn which packages we have pinned and why? > > This avoids any DLRN db hacks, but stops processing updates for the package until the breakage is fixed. > > What do you think? Any alternative ideas? One alternative is to maintain a list of packages we use only from release tags. 
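John's yum-plugin-priorities point can be made concrete with a pair of repo definitions. Everything below is illustrative (repo names, URLs, and the choice of priority values are placeholders): with the plugin enabled, the numerically lowest priority wins outright, so an overrides repo whose priority is weaker than the main DLRN repo would never take effect, regardless of NVR.

```shell
# Illustrative repo pair (placeholder URLs). With yum-plugin-priorities,
# packages from the repo with the numerically lowest priority shadow
# same-named packages in weaker repos, regardless of version.
cat > delorean-overrides.repo <<'EOF'
[delorean]
name=delorean-master
baseurl=http://example.org/delorean/current
priority=1
enabled=1
gpgcheck=0

[delorean-overrides]
name=delorean-overrides
baseurl=http://example.org/delorean/overrides
priority=1
enabled=1
gpgcheck=0
EOF

# With both repos at priority=1, ordinary NVR comparison decides again, so a
# pinned build with a higher EVR (or a bumped Epoch) can actually win:
grep -c '^priority=1' delorean-overrides.repo   # -> 2
```

In other words, shipping the override as part of delorean-deps.repo only works if consumers give that repo a priority at least as strong as the main DLRN repo; otherwise the priorities plugin silently ignores the pin.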
The root issue being that those projects do not strive to maintain master in working order, and in fact upstream CI is using releases to test them. I think the individual pinning solution is simpler if we only have a few issues per cycle, but the package list solution is better if it happens often enough that the pinning/unpinning becomes burdensome. Maybe we should go with the pinning solution for a cycle and re-evaluate (assuming the couple of minor issues above are resolvable). > > Thanks, > Javier > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From mscherer at redhat.com Thu Jun 23 15:58:45 2016 From: mscherer at redhat.com (Michael Scherer) Date: Thu, 23 Jun 2016 17:58:45 +0200 Subject: [rdo-list] dashboards.rdoproject.org is live Message-ID: <1466697525.30110.187.camel@redhat.com> Hi, so Fred found my hideout in the office and did ask me if I could move the RDO dashboard (https://github.com/rdo-infra/rdo-dashboards) to a different infra. So I worked yesterday night and today to produce that: https://github.com/rdo-infra/ansible-role-rdo_dashboards and deployed that: https://dashboards.rdoproject.org Now, that's the good news. On the topic of what needs to be done: - Fred asked for auto update of the code. I am on it, will do it tomorrow. - people want to change the domain (as it was decided by fiat by myself) That should be easy to do, just need to agree on something. So if people can discuss and decide, be my guest :) - Alan said he would prefer to have that as a subdirectory rather than a subdomain, which I would prefer to keep, for technical reasons (as the whole playbook is done on using different domain names), and since I want to place that outside of the current server.
The reason for wanting to use another server is here: https://github.com/rdo-infra/ansible-role-rdo_dashboards/blob/master/tasks/main.yml#L11 since we use gem, we need gcc, ruby, and node. I am not that happy to have that on the server, so I was looking at moving to openshift v3 (I got a preview access). That's a long-term solution, so for now, let's keep it this way. Finally, before we decide on the name, please do not publish it around, I do not want to carry redirection for too long :) -- Michael Scherer Sysadmin, Community Infrastructure and Platform, OSAS -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From jp.methot at planethoster.info Thu Jun 23 16:53:25 2016 From: jp.methot at planethoster.info (Jean-Philippe Methot) Date: Thu, 23 Jun 2016 12:53:25 -0400 Subject: [rdo-list] Grub timing out too quickly on default CentOS 7 Openstack images. Message-ID: <611d024d-51da-09ce-6fda-bbb5eceed965@planethoster.info> Hi, I am using the default CentOS7 image on my Openstack setup. When a VM boots, I would like to have at least 20 seconds to choose the kernel or boot into single user mode. To do so, I have modified the grub_timeout value in /etc/default/grub. Then, when I run grub2-mkconfig -o /boot/grub2/grub.cfg, the VM reboots right away and my changes do not seem to be taken into account. Is there anything I am missing here? From jweber at cofront.net Thu Jun 23 22:27:15 2016 From: jweber at cofront.net (Jeff Weber) Date: Thu, 23 Jun 2016 18:27:15 -0400 Subject: [rdo-list] RDO Liberty updated openstack-neutron-midonet package missing Message-ID: It looks like today the RDO packages for Liberty were updated to 7.1.1, but the openstack-neutron-midonet package is missing. Is there any way to get this corrected? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From apevec at redhat.com Thu Jun 23 22:56:13 2016 From: apevec at redhat.com (Alan Pevec) Date: Fri, 24 Jun 2016 00:56:13 +0200 Subject: [rdo-list] RDO Liberty updated openstack-neutron-midonet package missing In-Reply-To: References: Message-ID: On Fri, Jun 24, 2016 at 12:27 AM, Jeff Weber wrote: > It looks like today the RDO packages for Liberty were updated to 7.1.1, but > the openstack-neutron-midonet package is missing. Is there any way to get > this corrected? It was removed from the upstream stable/liberty branch in https://review.openstack.org/#/c/284524/ so packaging was adjusted in https://github.com/rdo-packages/neutron-distgit/commit/1fd37707dfed7486ce761ff08c193ec80541e13e AFAICT you need to migrate to https://github.com/openstack/networking-midonet which has not been contributed to RDO yet, so you can follow the README there for either option: vendor-provided packages or installing from sources. Cheers, Alan From puthi at live.com Fri Jun 24 07:34:02 2016 From: puthi at live.com (Soputhi Sea) Date: Fri, 24 Jun 2016 07:34:02 +0000 Subject: [rdo-list] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3 In-Reply-To: References: Message-ID: Hi, I'm using the Mitaka release (the very latest public release, from Jun-02), and I'm having an issue with List Project in Horizon. In my case I have multiple projects created, and when I log in to Horizon the drop-down list of projects (in the top left corner) doesn't populate properly; it lists only one project.
As I use Apache WSGI as a service instead of the keystone python web service, I checked the apache log, and here is what I found: [23/Jun/2016:17:09:37 +0700] "GET /v3/tenants HTTP/1.1" 404 93 "-" "python-keystoneclient" [23/Jun/2016:18:47:18 +0700] "POST /v3/tokens HTTP/1.1" 404 93 "-" "keystoneauth1/2.4.1 python-requests/2.10.0 CPython/2.7.5" You can see here that the URI "/v3/tenants" should be "/v2.0/tenants" or "/v3/projects" (I think), and /v3/tokens should be "/v2.0/tokens" or "/v3/auth/tokens". So I wonder if this is a bug in python-keystoneclient, or is there any configuration I can do to force the client/keystone/horizon to use the proper URI call? As a workaround, I fixed this issue by creating redirect rules in apache as follows: RewriteEngine on Redirect /v3/tenants /v2.0/tenants Redirect /v3/tokens /v2.0/tokens Thanks in advance for any help. Puthi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Mon Jun 27 05:12:27 2016 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 27 Jun 2016 07:12:27 +0200 Subject: [rdo-list] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3 In-Reply-To: References: Message-ID: On 24/06/16 09:34, Soputhi Sea wrote: > > You can see here that the URI "/v3/tenants" should be "/v2.0/tenants" or > "/v3/projects" (I think), > > and /v3/tokens should be "/v2.0/tokens" or "/v3/auth/tokens". > > > So I wonder if this is a bug in python-keystoneclient, or is there > any configuration I can do to force the client/keystone/horizon to use the > proper URI call? > > > As a workaround, I fixed this issue by creating > redirect rules in apache as follows: > > RewriteEngine on > Redirect /v3/tenants /v2.0/tenants > Redirect /v3/tokens /v2.0/tokens > > Thanks in advance for any help.
Initial authentication URL is configured in /etc/openstack-dashboard/local_settings, and following interactions with API endpoints use settings from keystone. openstack catalog list will show you your configured endpoints. -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From puthi at live.com Mon Jun 27 07:52:58 2016 From: puthi at live.com (Soputhi Sea) Date: Mon, 27 Jun 2016 07:52:58 +0000 Subject: [rdo-list] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3 In-Reply-To: References: , Message-ID: Hi, Thanks for pointing out, i just revisit the document for openstack-dashboard. And here is what i changed, it works now. /etc/openstack-dashboard/local_settings #note this option, i thought it enable by defaut but it is not OPENSTACK_API_VERSIONS = { "identity": 3, "image": 2, "volume": 2, } ... OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST Thanks Puthi ________________________________ From: rdo-list-bounces at redhat.com on behalf of Matthias Runge Sent: Monday, June 27, 2016 12:12 PM To: rdo-list at redhat.com Subject: Re: [rdo-list] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3 On 24/06/16 09:34, Soputhi Sea wrote: > > You can see here the URI "/v3/tenants" should be "/v2.0/tenants" or > "/v3/projects" (i think) > > and /v3/tokens should be "/v2.0/tokens" or "/v3/auth/tokens" > > > So i wonder if this is a bug in the python-keystoneclient or is there > any configuration i can do to force the client/keystone/horizon to use a > proper URI call? > > > As a side, i applied a workaround to fix this issue by creating a > redirect rule in apache as follow > > RewriteEngine on > Redirect /v3/tenants /v2.0/tenants > Redirect /v3/tokens /v2.0/tokens > > Thanks in advance for any help. 
> Puthi > Horizon gets endpoints from the keystone catalog. The initial authentication URL is configured in /etc/openstack-dashboard/local_settings, and following interactions with API endpoints use settings from keystone. openstack catalog list will show you your configured endpoints. -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at redhat.com Mon Jun 27 08:17:47 2016 From: apevec at redhat.com (Alan Pevec) Date: Mon, 27 Jun 2016 10:17:47 +0200 Subject: [rdo-list] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3 In-Reply-To: References: Message-ID: On Mon, Jun 27, 2016 at 9:52 AM, Soputhi Sea wrote: > Thanks for pointing that out, I just revisited the document for openstack-dashboard. Here is what I changed; it works now. > /etc/openstack-dashboard/local_settings > #note this option, I thought it was enabled by default but it is not > OPENSTACK_API_VERSIONS = { > "identity": 3, > "image": 2, > "volume": 2, > } > ... > OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST What was the installation tool, packstack or tripleo?
Cheers, Alan From puthi at live.com Mon Jun 27 08:22:14 2016 From: puthi at live.com (Soputhi Sea) Date: Mon, 27 Jun 2016 08:22:14 +0000 Subject: [rdo-list] python-keystoneclient (2.3.1-2) make wrong URI call for keystone api V3 In-Reply-To: References: Message-ID: Hi, I wrote my own ansible playbooks to deploy the components; the needs of my environment are a bit specific. I followed the documentation from openstack. Thanks Puthi > On Jun 27, 2016, at 3:17 PM, Alan Pevec wrote: > >> On Mon, Jun 27, 2016 at 9:52 AM, Soputhi Sea wrote: >> Thanks for pointing that out, I just revisited the document for openstack-dashboard. Here is what I changed; it works now. >> /etc/openstack-dashboard/local_settings >> #note this option, I thought it was enabled by default but it is not >> OPENSTACK_API_VERSIONS = { >> "identity": 3, >> "image": 2, >> "volume": 2, >> } >> ... >> OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST > > What was the installation tool, packstack or tripleo? > > Cheers, > Alan From Milind.Gunjan at sprint.com Mon Jun 27 13:41:49 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Mon, 27 Jun 2016 13:41:49 +0000 Subject: [rdo-list] Redeploying UnderCloud Message-ID: <6088bd139e7f48d4b7e857c03e854465@PREWE13M11.ad.sprint.com> Hi All, Greetings. This is my first post and I am fairly new to RDO OpenStack. I am working on an RDO Triple-O deployment on bare metal. Due to incorrect values in the undercloud.conf file, my undercloud deployment failed. I would like to redeploy the undercloud, and I am trying to understand what steps have to be taken before redeploying it. All the deployment was done under the stack user, so the first step will be to delete the stack user. I am not sure what has to be done regarding the networking configuration autogenerated by os-net-config during the earlier install. Please advise. Best Regards, Milind ________________________________ Learn more on how to switch to Sprint and save 50% on most Verizon, AT&T or T-Mobile rates.
See sprint.com/50off for details. ________________________________ This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at remote-lab.net Mon Jun 27 13:52:46 2016 From: marius at remote-lab.net (Marius Cornea) Date: Mon, 27 Jun 2016 15:52:46 +0200 Subject: [rdo-list] Redeploying UnderCloud In-Reply-To: <6088bd139e7f48d4b7e857c03e854465@PREWE13M11.ad.sprint.com> References: <6088bd139e7f48d4b7e857c03e854465@PREWE13M11.ad.sprint.com> Message-ID: Hi, Can you try adjusting the undercloud.conf with the correct values and rerun 'openstack undercloud install'? It should apply the correct configuration to the system. Thanks, Marius On Mon, Jun 27, 2016 at 3:41 PM, Gunjan, Milind [CTO] wrote: > Hi All, > > Greeting. > > > > This is my first post and I am fairly new to RDO OpenStack. I am working on > RDO Triple-O deployment on baremetal. Due to incorrect values in > undercloud.conf file , my undercloud deployment failed. I would like to > redeploy undercloud and I am trying to understand what steps has to be taken > before redeploying undercloud. All the deployment was under stack user . So > first step will be to delete stack user. I am not sure what has to be done > regarding the networking configuration autogenerated by os-net-config during > the older install. > > Please advise. > > > > Best Regards, > > Milind > > > ________________________________ > Learn more on how to switch to Sprint and save 50% on most Verizon, AT&T or > T-Mobile rates. See sprint.com/50off for details. > > ________________________________ > > This e-mail may contain Sprint proprietary information intended for the sole > use of the recipient(s). Any use by others is prohibited.
If you are not the > intended recipient, please contact the sender and delete all copies of the > message. > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From imaslov at dispersivegroup.com Mon Jun 27 14:46:38 2016 From: imaslov at dispersivegroup.com (Ilja Maslov) Date: Mon, 27 Jun 2016 14:46:38 +0000 Subject: [rdo-list] Redeploying UnderCloud In-Reply-To: References: <6088bd139e7f48d4b7e857c03e854465@PREWE13M11.ad.sprint.com> Message-ID: Hi, If you are trying to install from the delorean, I've noticed that last week the undercloud installation broke. Installing undercloud from the centos-release-openstack-mitaka channel seems to work w/o a hitch. Ilja -----Original Message----- From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of Marius Cornea Sent: Monday, June 27, 2016 9:53 AM To: Gunjan, Milind [CTO] Cc: rdo-list at redhat.com Subject: Re: [rdo-list] Redeploying UnderCloud Hi, Can you try adjusting the undercloud.conf with the correct values and rerun 'openstack undercloud install'? It should apply the corrrect configuration to the system. Thanks, Marius On Mon, Jun 27, 2016 at 3:41 PM, Gunjan, Milind [CTO] wrote: > Hi All, > > Greeting. > > > > This is my first post and I am fairly new to RDO OpenStack. I am > working on RDO Triple-O deployment on baremetal. Due to incorrect > values in undercloud.conf file , my undercloud deployment failed. I > would like to redeploy undercloud and I am trying to understand what > steps has to be taken before redeploying undercloud. All the > deployment was under stack user . So first step will be to delete > stack user. I am not sure what has to be done regarding the networking > configuration autogenerated by os-net-config during the older install. > > Please advise. 
> > > Best Regards, > > Milind > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com From hguemar at fedoraproject.org Mon Jun 27 15:00:03 2016 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 27 Jun 2016 15:00:03 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20160627150003.75BF460A4009@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2016-06-29 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting Agenda at https://etherpad.openstack.org/p/RDO-Meeting Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From dsneddon at redhat.com Mon Jun 27 16:40:05 2016 From: dsneddon at redhat.com (Dan Sneddon) Date: Mon, 27 Jun 2016 09:40:05 -0700 Subject: [rdo-list] Redeploying UnderCloud In-Reply-To: <6088bd139e7f48d4b7e857c03e854465@PREWE13M11.ad.sprint.com> References: <6088bd139e7f48d4b7e857c03e854465@PREWE13M11.ad.sprint.com> Message-ID: <6fd35627-027f-788d-062b-678206a78502@redhat.com> On 06/27/2016 06:41 AM, Gunjan, Milind
[CTO] wrote: > Hi All, > > Greeting. > > > > This is my first post and I am fairly new to RDO OpenStack. I am > working on RDO Triple-O deployment on baremetal. Due to incorrect > values in undercloud.conf file , my undercloud deployment failed. I > would like to redeploy undercloud and I am trying to understand what > steps has to be taken before redeploying undercloud. All the deployment > was under stack user . So first step will be to delete stack user. I am > not sure what has to be done regarding the networking configuration > autogenerated by os-net-config during the older install. > > Please advise. > > > > Best Regards, > > Milind No, definitely you don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc. that are in ~stack, and generally make your life harder than it needs to be. Anyway, it isn't necessary. You can do a procedure very much like what you do when upgrading the undercloud, with a couple of extra steps. As the stack user, edit your undercloud.conf, and make sure there are no more typos. If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files: $ sudo ifdown br-ctlplane $ sudo ovs-vsctl del-br br-ctlplane $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane Next, run the undercloud installation again: $ sudo yum update -y # Reboot after if kernel or core packages updated $ openstack undercloud install Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes, they will still be in the database. That's OK, or you can delete and reimport.
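[Editor's note] The sequence above can be collected into one small helper script. This is a sketch, not part of the original message: the bridge name and the ifcfg glob are taken from the commands above, while the dry-run wrapper (`run`, enabled via `RUN=1`) is added here so that the destructive commands are printed rather than executed by default.

```shell
#!/bin/sh
# Sketch of the undercloud cleanup-and-reinstall sequence described above.
# DRY RUN by default: each command is only printed. Set RUN=1 to execute.
BRIDGE=br-ctlplane

run() {
    if [ "${RUN:-0}" = "1" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Tear down the control-plane bridge left behind by the failed install
run sudo ifdown "$BRIDGE"
run sudo ovs-vsctl del-br "$BRIDGE"
run sudo rm -f /etc/sysconfig/network-scripts/*br-ctlplane

# Refresh packages (reboot if kernel/core packages were updated), then reinstall
run sudo yum update -y
run openstack undercloud install
```

Run it once without `RUN` set to review the commands, then rerun it with `RUN=1` to apply them.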
-- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From ckdwibedy at gmail.com Tue Jun 28 02:11:57 2016 From: ckdwibedy at gmail.com (Chinmaya Dwibedy) Date: Tue, 28 Jun 2016 07:41:57 +0530 Subject: [rdo-list] Issue with IPsec ESP packets dropped even if the security-groups and port security are disabled (using openstack-mitaka release on CentOS 7.2 system) Message-ID: Hi All, I have installed the openstack-mitaka release on a CentOS 7.2 system. I have disabled the security-groups and port security for all the neutron ports/all VMs using the script below. ML2 port security is enabled in /etc/neutron/plugins/ml2/ml2_conf.ini: extension_drivers = port_security #!/bin/bash for port in $(neutron port-list -c id -c port_security_enabled -c fixed_ips | grep True | cut -d '|' -f2); do echo "Removing security-groups and port_security for port: $port" neutron port-update --no-security-groups --port_security_enabled=False $port done echo "Completed" Thereafter, when I send IPsec ESP traffic from one VM (VM1) to another (VM2), it is received and captured (by tcpdump) on the corresponding tap device, but it is not received on the Linux bridge (qbrxxx) and qvbxxx (of VM1). Note that if I send UDP traffic I do not see any issue; it is forwarded to VM2. VM1's eth0 interface is connected to a Linux tap device, tap2caa3b0e-e3, which is plugged into a Linux bridge, qbr2caa3b0e-e3. There is no iptables filtering applied when packets pass into or out of the Linux bridge. Can anyone please suggest what might be the issue and its solution? Thank you in advance for your time and support. Here go the configurations. Please feel free to let me know if you need any additional information.
[root at stag48 ~(keystone_admin)]# brctl show
bridge name      bridge id            STP enabled    interfaces
qbr2caa3b0e-e3   8000.1ec72d90a310    no             qvb2caa3b0e-e3
                                                     tap2caa3b0e-e3
qbr408fa3a3-b4   8000.e6f0e680f28f    no             qvb408fa3a3-b4
                                                     tap408fa3a3-b4
qbr5fa991b5-de   8000.02c32f416df0    no             qvb5fa991b5-de
                                                     tap5fa991b5-de
qbraf134785-23   8000.46e43737b69f    no             qvbaf134785-23
                                                     tapaf134785-23
qbre698fa07-9c   8000.5ea17f458f55    no             qvbe698fa07-9c
                                                     tape698fa07-9c
qbrf6756f4d-08   8000.b2f79fe90f20    no             qvbf6756f4d-08
                                                     tapf6756f4d-08
[root at stag48 ~(keystone_admin)]# iptables -S | grep tap2caa3b0e-e3
[root at stag48 ~(keystone_admin)]#
[root at stag48 ~(keystone_admin)]# neutron security-group-rule-list
+--------------------------------------+----------------+-----------+-----------+---------------+------------------+
| id                                   | security_group | direction | ethertype | port/protocol | remote           |
+--------------------------------------+----------------+-----------+-----------+---------------+------------------+
| 16c2d8c8-a286-4b71-8045-94cd303b5c02 | default        | ingress   | IPv4      | 22/tcp        | 0.0.0.0/0 (CIDR) |
| 2332057f-8c66-4aa6-8700-561b26a5b906 | default        | ingress   | IPv4      | any           | default (group)  |
| 4798772b-561f-4960-85b2-2453613d527e | default        | ingress   | IPv6      | any           | default (group)  |
| 5142e3b2-d2ff-40c5-87eb-5d646852f2d4 | default        | ingress   | IPv4      | icmp          | 0.0.0.0/0 (CIDR) |
| 7179fc0a-5533-433a-8cc9-3099eeff5a4b | default        | egress    | IPv4      | any           | any              |
| 7cb2f140-6c97-499a-b5f7-6bcc16f6c9a3 | default        | ingress   | IPv6      | any           | default (group)  |
| 829e7607-463a-4c7a-b162-8357f47924d1 | default        | ingress   | IPv4      | 1-65535/udp   | 0.0.0.0/0 (CIDR) |
| 9f1b8571-3c46-4f53-ac80-835d2186a3c0 | default        | egress    | IPv6      | any           | any              |
| bd46535b-6311-46f6-9b5c-cda78194ac01 | default        | egress    | IPv4      | any           | any              |
| e1b7ab35-8426-4c07-b5bc-d5760b291520 | default        | ingress   | IPv4      | any           | default (group)  |
| e82da2bf-f2e1-4d33-916b-ecb90b5db857 | default        | egress    | IPv6      | any           | any              |
+--------------------------------------+----------------+-----------+-----------+---------------+------------------+
[root at stag48 ~(keystone_admin)]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
+-------------+-----------+---------+-----------+--------------+
[root at stag48 ~(keystone_admin)]#
[root at stag48 ~(keystone_admin)]# nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                                                                 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------------+
| 38207997-25af-4113-bc40-109b2745412c | VM2  | ACTIVE | -          | Running     | private1=11.0.151.13, 172.19.208.25; private=10.0.151.50, 172.19.208.15 |
| 302f90eb-2d0a-4a74-8e95-92ac8c7e2b71 | VM1  | ACTIVE | -          | Running     | private1=11.0.151.14, 172.19.208.26; private=10.0.151.51, 172.19.208.16 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------------+
[root at stag48 ~(keystone_admin)]#
Regards, Chinmaya From me at gbraad.nl Tue Jun 28 02:25:08 2016 From: me at gbraad.nl (Gerard Braad) Date: Tue, 28 Jun 2016 10:25:08 +0800 Subject: [rdo-list] [centos-ci] Artifacts server does not support ranges / resume of downloads?
Message-ID: Hi all, Usually I download the undercloud images using: https://gist.github.com/gbraad/45cbe30415b0dc631f5e8d20beaffebf which targets: 'artifacts.ci.centos.org' but now I get "HTTP server doesn't seem to support byte ranges. Cannot resume." I need to resume a download about 10 times on the office connection (China), so what happened? I have a workaround of storing the files first at another server, in Japan or on the west coast, which supports resuming. Still, being able to resume the download of these files directly would be preferable. regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From Milind.Gunjan at sprint.com Tue Jun 28 03:18:55 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Tue, 28 Jun 2016 03:18:55 +0000 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment Message-ID: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> Hi Dan, Thanks a lot for your response. Even after properly updating the undercloud.conf file and checking the network configuration, the undercloud deployment still fails. To recreate the issue, here are all the configuration steps: 1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. 2. created the stack user and granted it the required permissions. 3. setting hostname sudo hostnamectl set-hostname rdo-undercloud.mydomain sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain [stack at rdo-undercloud etc]$ cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain 4. enable required repositories sudo yum -y install epel-release sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo 5.
install repos sudo yum -y install yum-plugin-priorities sudo yum install -y python-tripleoclient 6. update undercloud.conf [stack at rdo-undercloud ~]$ cat undercloud.conf [DEFAULT] local_ip = 192.0.2.1/24 undercloud_public_vip = 192.0.2.2 undercloud_admin_vip = 192.0.2.3 local_interface = enp6s0 masquerade_network = 192.0.2.0/24 dhcp_start = 192.0.2.150 dhcp_end = 192.0.2.199 network_cidr = 192.0.2.0/24 network_gateway = 192.0.2.1 discovery_iprange = 192.0.2.200,192.0.2.230 discovery_runbench = false [auth] 7. install undercloud openstack undercloud install install ends in error: Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. 
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures. Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures. Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures. Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures. Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Not managing Keystone_service[nova] due to earlier Keystone API failures. 
Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures. Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures. Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures. Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. Error: Not managing Keystone_service[swift] due to earlier Keystone API failures. Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures. Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures. 
Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures. Error: Not managing Keystone_service[heat] due to earlier Keystone API failures. Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures. Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures. Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures. Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. Error: Not managing Keystone_role[admin] due to earlier Keystone API failures. Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures. 
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events Notice: Finished catalog run in 5259.44 seconds + rc=6 + set -e + echo 'puppet apply exited with exit code 6' puppet apply exited with exit code 6 + '[' 6 '!=' 2 -a 6 '!=' 0 ']' + exit 6 [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install _run_orc(instack_env) File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc _run_live_command(args, instack_env, 'os-refresh-config') File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command raise RuntimeError('%s failed. See log for details.' % name) RuntimeError: os-refresh-config failed. See log for details. Command 'instack-install-undercloud' returned non-zero exit status 1 I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you could point me in the right direction to understand the exact cause of the issue and any possible resolution. Thanks a lot. Best Regards, Milind -----Original Message----- From: Dan Sneddon [mailto:dsneddon at redhat.com] Sent: Monday, June 27, 2016 12:40 PM To: Gunjan, Milind [CTO] ; rdo-list at redhat.com Subject: Re: [rdo-list] Redeploying UnderCloud On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: > Hi All, > > Greeting. > > > > This is my first post and I am fairly new to RDO OpenStack. I am > working on RDO Triple-O deployment on baremetal. Due to incorrect > values in undercloud.conf file , my undercloud deployment failed. I > would like to redeploy undercloud and I am trying to understand what > steps has to be taken before redeploying undercloud. All the > deployment was under stack user . So first step will be to delete > stack user. I am not sure what has to be done regarding the networking > configuration autogenerated by os-net-config during the older install. > > Please advise. > > > > Best Regards, > > Milind No, definitely you don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc.
that are in ~stack, and generally make your life harder than it needs to be. Anyway, it isn't necessary. You can do a procedure very much like what you do when upgrading the undercloud, with a couple of extra steps. As the stack user, edit your undercloud.conf, and make sure there are no more typos. If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files: $ sudo ifdown br-ctlplane $ sudo ovs-vsctl del-br br-ctlplane $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane Next, run the undercloud installation again: $ sudo yum update -y # Reboot after if kernel or core packages updated $ openstack undercloud install Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes, they will still be in the database. That's OK, or you can delete and reimport. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From marius at remote-lab.net Tue Jun 28 08:27:54 2016 From: marius at remote-lab.net (Marius Cornea) Date: Tue, 28 Jun 2016 10:27:54 +0200 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment In-Reply-To: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> Message-ID: On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] wrote: > Hi Dan, > Thanks a lot for your response.
> > Even after properly updating the undercloud.conf file and checking the network configuration, undercloud deployment still fails. > > To recreate the issue , I am mentioning all the configuration steps: > 1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. > 2. created stack user and provided required permission to stack user . > 3. setting hostname > sudo hostnamectl set-hostname rdo-undercloud.mydomain > sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain > > [stack at rdo-undercloud etc]$ cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain Could you try removing the 192.0.2.1 entry from /etc/hosts and replacing it with the address of the public interface, e.g.: $ip_public_nic rdo-undercloud rdo-undercloud.mydomain then rerun openstack undercloud install > 4. enable required repositories > sudo yum -y install epel-release > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > 5. install repos > > sudo yum -y install yum-plugin-priorities > sudo yum install -y python-tripleoclient > > 6. update undercloud.conf > > [stack at rdo-undercloud ~]$ cat undercloud.conf > [DEFAULT] > local_ip = 192.0.2.1/24 > undercloud_public_vip = 192.0.2.2 > undercloud_admin_vip = 192.0.2.3 > local_interface = enp6s0 > masquerade_network = 192.0.2.0/24 > dhcp_start = 192.0.2.150 > dhcp_end = 192.0.2.199 > network_cidr = 192.0.2.0/24 > network_gateway = 192.0.2.1 > discovery_iprange = 192.0.2.200,192.0.2.230 > discovery_runbench = false > [auth] > > 7.
install undercloud > > openstack undercloud install > > install ends in error: > Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) > Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. > Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. > Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. > Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. > Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures. > Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures. 
> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures. > Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures. > Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. > Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. > Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Not managing Keystone_service[nova] due to earlier Keystone API failures. > Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures. > Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. 
> Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. > Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures. > Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures. > Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. > Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. > Error: Not managing Keystone_service[swift] due to earlier Keystone API failures. > Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures. > Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures. > Error: Not managing Keystone_service[heat] due to earlier Keystone API failures. 
> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures. > Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) > Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures. > Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. > Error: Not managing Keystone_role[admin] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures. 
> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") > Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] > Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] > [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] > > Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events > Notice: Finished catalog run in 5259.44 seconds > + rc=6 > + set -e > + echo 'puppet apply exited with exit code 6' > puppet apply exited with exit code 6 > + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > + exit 6 > [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] > > [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
> Traceback (most recent call last): > File "<string>", line 1, in <module> > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install > _run_orc(instack_env) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc > _run_live_command(args, instack_env, 'os-refresh-config') > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command > raise RuntimeError('%s failed. See log for details.' % name) > RuntimeError: os-refresh-config failed. See log for details. > Command 'instack-install-undercloud' returned non-zero exit status 1 > > > I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you could point me in the right direction to understand the exact cause of the issue and any possible resolution. > > Thanks a lot. > > Best Regards, > Milind > -----Original Message----- > From: Dan Sneddon [mailto:dsneddon at redhat.com] > Sent: Monday, June 27, 2016 12:40 PM > To: Gunjan, Milind [CTO] ; rdo-list at redhat.com > Subject: Re: [rdo-list] Redeploying UnderCloud > > On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: >> Hi All, >> >> Greetings. >> >> >> >> This is my first post and I am fairly new to RDO OpenStack. I am >> working on RDO Triple-O deployment on baremetal. Due to incorrect >> values in the undercloud.conf file, my undercloud deployment failed. I >> would like to redeploy the undercloud and I am trying to understand what >> steps have to be taken before redeploying it. All the >> deployment was done as the stack user, so the first step would be to delete >> the stack user. I am not sure what has to be done regarding the networking >> configuration autogenerated by os-net-config during the older install. >> >> Please advise. >> >> >> >> Best Regards, >> >> Milind > > No, you definitely don't want to delete the stack user, especially not as your first step!
That would get rid of the configuration files, etc. > that are in ~stack, and generally make your life harder than it needs to be. > > Anyway, it isn't necessary. You can do a procedure very much like what you do when upgrading the undercloud, with a couple of extra steps. > > As the stack user, edit your undercloud.conf, and make sure there are no more typos. > > If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files: > > $ sudo ifdown br-ctlplane > $ sudo ovs-vsctl del-br br-ctlplane > $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane > > Next, run the undercloud installation again: > > $ sudo yum update -y # Reboot afterwards if kernel or core packages were updated > $ openstack undercloud install > > Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes, they will still be in the database. That's OK, or you can delete and reimport. > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter > > > > ________________________________ > Learn more on how to switch to Sprint and save 50% on most Verizon, AT&T or T-Mobile rates. See sprint.com/50off for details. > > ________________________________ > > This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message.
> > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From mpavlase at redhat.com Tue Jun 28 08:58:00 2016 From: mpavlase at redhat.com (Martin Pavlásek) Date: Tue, 28 Jun 2016 10:58:00 +0200 Subject: [rdo-list] Mouse does not work in a hosted virtual machine using openstack-mitaka release In-Reply-To: References: Message-ID: Hi, you can try using: $ x11vnc Simply run this command inside the VM (check that the port this server listens on, usually 5900 or greater, is reachable from outside) and connect to it with: $ vncviewer Martin On 20/06/16 13:56, Chinmaya Dwibedy wrote: > Hi, > > Can anyone please suggest how to enable the mouse in a VM instance? Thank you in > advance for your time and support. > > Regards, > Chinmaya > > On Fri, Jun 17, 2016 at 6:40 PM, Chinmaya Dwibedy > wrote: > > Hi All, > > > I have installed the openstack-mitaka release on stag48 (CentOS 7 system) and > created VMs (Fedora 20). Logged in using the VM's instance console via the > horizon dashboard. The mouse does not function within a virtual machine. > Can anyone suggest how to enable this? > > > > Regards, > > Chinmaya > From apevec at redhat.com Tue Jun 28 09:36:20 2016 From: apevec at redhat.com (Alan Pevec) Date: Tue, 28 Jun 2016 11:36:20 +0200 Subject: [rdo-list] Attempts at Tripleo Quickstart deployments hang downloading undercloud.qcow2 In-Reply-To: References: <57630DEA.4080806@redhat.com> Message-ID: On Fri, Jun 17, 2016 at 11:13 AM, Alan Pevec wrote: >> Quick idea: change download format to tarball which includes image + md5 ?
> > I've piggy-backed it in > https://bugs.launchpad.net/tripleo-quickstart/+bug/1579921 > could we raise priority for this RFE ? Looks like piggy-backing there was denied. What about appending checksum at qcow2 EOF, then truncating the file after checking? Cheers, Alan From javier.pena at redhat.com Tue Jun 28 14:21:26 2016 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 28 Jun 2016 10:21:26 -0400 (EDT) Subject: [rdo-list] [DLRN] Switching the fedora-master worker from f23 to f24 In-Reply-To: <481691974.3175883.1467123633962.JavaMail.zimbra@redhat.com> Message-ID: <268199787.3176005.1467123686463.JavaMail.zimbra@redhat.com> Hi RDO, Now that Fedora 24 has been released, we are removing the old fedora-master worker (based on f23) and setting up the new one, based on f24. Also, the fedora-rawhide-master worker will be reconfigured to use http://trunk.rdoproject.org/f25 as its base URL. It will take some time to clean up the old data and bootstrap the new worker; we will keep you updated on the current status. Thanks, Javier Peña From trown at redhat.com Tue Jun 28 16:33:47 2016 From: trown at redhat.com (John Trowbridge) Date: Tue, 28 Jun 2016 12:33:47 -0400 Subject: [rdo-list] Attempts at Tripleo Quickstart deployments hang downloading undercloud.qcow2 In-Reply-To: References: <57630DEA.4080806@redhat.com> Message-ID: <5772A6EB.80201@redhat.com> On 06/28/2016 05:36 AM, Alan Pevec wrote: > On Fri, Jun 17, 2016 at 11:13 AM, Alan Pevec wrote: >>> Quick idea: change download format to tarball which includes image + md5 ? >> >> I've piggy-backed it in >> https://bugs.launchpad.net/tripleo-quickstart/+bug/1579921 >> could we raise priority for this RFE ? > > Looks like piggy-backing there was denied. > What about appending checksum at qcow2 EOF, then truncating the file > after checking? > Hmm, that could work.
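For illustration, the append-then-truncate flow could look like this rough sketch, assuming the publisher appends a plain 32-character md5 hex digest to the end of the image (the helper names are made up; this is not existing tripleo-quickstart code):

```python
import hashlib
import os

MD5_HEX_LEN = 32  # an md5 hex digest is always 32 ASCII characters


def append_checksum(path):
    """Compute the file's md5 and append the hex digest as a trailer."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    with open(path, "ab") as f:
        f.write(md5.hexdigest().encode("ascii"))


def verify_and_truncate(path):
    """Verify the trailing digest, then truncate it off on success."""
    payload_len = os.path.getsize(path) - MD5_HEX_LEN
    if payload_len < 0:
        return False  # too small to carry a trailer; truncated download
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        remaining = payload_len
        while remaining > 0:
            chunk = f.read(min(1 << 20, remaining))
            if not chunk:
                return False  # short read: file changed underneath us
            md5.update(chunk)
            remaining -= len(chunk)
        trailer = f.read(MD5_HEX_LEN).decode("ascii", "replace")
    if trailer != md5.hexdigest():
        return False  # corrupted download; keep the file for inspection
    with open(path, "r+b") as f:
        f.truncate(payload_len)  # drop the trailer, restoring the raw image
    return True
```

The download side would then fetch a single file and verify it in place, so no separate .md5 request is needed and an interrupted download fails the check.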
We would need to change the image building role to do the checksums that way [1], at the same time changing tripleo-quickstart itself to consume the checksum in the new way. The change to the image building role would likely need to be guarded with a non-default option so as not to break other users of that role. Likewise the new behavior in tripleo-quickstart would need to be optional, though probably default if we change the image url to the CDN with these "all-in-one" images. [1] https://github.com/redhat-openstack/ansible-role-tripleo-image-build > Cheers, > Alan From Milind.Gunjan at sprint.com Tue Jun 28 20:45:25 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Tue, 28 Jun 2016 20:45:25 +0000 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment In-Reply-To: References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> Message-ID: <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> Hi All, Thanks a lot for continued support. I would just like to get clarity regarding the last recommendation.
My current deployment is failing with the following error: Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) I have previously done an RHEL OSP7 deployment, and I verified that the hosts file of that undercloud was configured with the host gateway used by the PXE network. Similarly, we have set the current host gateway to 192.0.2.1 as shown below for the RDO Manager undercloud installation: [stack at rdo-undercloud etc]$ cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain I would like to know if it is required during an RDO deployment to have it with the address of the public interface, e.g: $ip_public_nic rdo-undercloud rdo-undercloud.mydomain. Thanks again for your time and help. Really appreciate it. Best Regards, milind -----Original Message----- From: Marius Cornea [mailto:marius at remote-lab.net] Sent: Tuesday, June 28, 2016 4:28 AM To: Gunjan, Milind [CTO] Cc: rdo-list at redhat.com Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] wrote: > Hi Dan, > Thanks a lot for your response. > > Even after properly updating the undercloud.conf file and checking the network configuration, undercloud deployment still fails. > > To recreate the issue, I am mentioning all the configuration steps: > 1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. > 2.
created stack user and provided required permission to stack user . > 3. setting hostname > sudo hostnamectl set-hostname rdo-undercloud.mydomain > sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain > > [stack at rdo-undercloud etc]$ cat /etc/hosts > 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 > ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 > 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain Could you try removing the 192.0.2.1 entry from /etc/hosts and replace it with the address of the public interface, e.g: $ip_public_nic rdo-undercloud rdo-undercloud.mydomain then rerun openstack undercloud install > 4. enable required repositories > sudo yum -y install epel-release > sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > > 5. install repos > > sudo yum -y install yum-plugin-priorities > sudo yum install -y python-tripleoclient > > 6. update undercloud.conf > > [stack at rdo-undercloud ~]$ cat undercloud.conf > [DEFAULT] > local_ip = 192.0.2.1/24 > undercloud_public_vip = 192.0.2.2 > undercloud_admin_vip = 192.0.2.3 > local_interface = enp6s0 > masquerade_network = 192.0.2.0/24 > dhcp_start = 192.0.2.150 > dhcp_end = 192.0.2.199 > network_cidr = 192.0.2.0/24 > network_gateway = 192.0.2.1 > discovery_iprange = 192.0.2.200,192.0.2.230 > discovery_runbench = false > [auth] > > 7. 
install undercloud > > openstack undercloud install > > install ends in error: > Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) > Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. > Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. > Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. > Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. > Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures. > Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures. 
> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures. > Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures. > Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. > Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. > Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Not managing Keystone_service[nova] due to earlier Keystone API failures. > Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures. > Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. 
> Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. > Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures. > Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures. > Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. > Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. > Error: Not managing Keystone_service[swift] due to earlier Keystone API failures. > Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures. > Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures. > Error: Not managing Keystone_service[heat] due to earlier Keystone API failures. 
> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures. > Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) > Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures. > Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. > Error: Not managing Keystone_role[admin] due to earlier Keystone API failures. > Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures. 
> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default > Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") > Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] > Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] > [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] > > Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events > Notice: Finished catalog run in 5259.44 seconds > + rc=6 > + set -e > + echo 'puppet apply exited with exit code 6' > puppet apply exited with exit code 6 > + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > + exit 6 > [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] > > [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
> Traceback (most recent call last): > File "<string>", line 1, in <module> > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install > _run_orc(instack_env) > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc > _run_live_command(args, instack_env, 'os-refresh-config') > File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command > raise RuntimeError('%s failed. See log for details.' % name) > RuntimeError: os-refresh-config failed. See log for details. > Command 'instack-install-undercloud' returned non-zero exit status 1 > > > I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you could point me in the right direction to understand the exact cause of the issue and any possible resolution. > > Thanks a lot. > > Best Regards, > Milind > -----Original Message----- > From: Dan Sneddon [mailto:dsneddon at redhat.com] > Sent: Monday, June 27, 2016 12:40 PM > To: Gunjan, Milind [CTO] ; rdo-list at redhat.com > Subject: Re: [rdo-list] Redeploying UnderCloud > > On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: >> Hi All, >> >> Greetings. >> >> >> >> This is my first post and I am fairly new to RDO OpenStack. I am >> working on RDO Triple-O deployment on baremetal. Due to incorrect >> values in the undercloud.conf file, my undercloud deployment failed. I >> would like to redeploy the undercloud and I am trying to understand what >> steps have to be taken before redeploying it. All the >> deployment was done as the stack user, so the first step would be to delete >> the stack user. I am not sure what has to be done regarding the networking >> configuration autogenerated by os-net-config during the older install. >> >> Please advise. >> >> >> >> Best Regards, >> >> Milind > > No, you definitely don't want to delete the stack user, especially not as your first step!
That would get rid of the configuration files, etc. > that are in ~stack, and generally make your life harder than it needs to be. > > Anyway, it isn't necessary. You can do a procedure very much like what you do when upgrading the undercloud, with a couple of extra steps. > > As the stack user, edit your undercloud.conf, and make sure there are no more typos. > > If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files: > > $ sudo ifdown br-ctlplane > $ sudo ovs-vsctl del-br br-ctlplane > $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane > > Next, run the undercloud installation again: > > $ sudo yum update -y # Reboot afterwards if kernel or core packages were updated > $ openstack undercloud install > > Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes, they will still be in the database. That's OK, or you can delete and reimport. > > -- > Dan Sneddon | Principal OpenStack Engineer > dsneddon at redhat.com | redhat.com/openstack > 650.254.4025 | dsneddon:irc @dxs:twitter
From marius at remote-lab.net Wed Jun 29 06:56:28 2016 From: marius at remote-lab.net (Marius Cornea) Date: Wed, 29 Jun 2016 08:56:28 +0200 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment In-Reply-To: <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> Message-ID: On Tue, Jun 28, 2016 at 10:45 PM, Gunjan, Milind [CTO] wrote: > Hi All, > > Thanks a lot for continued support. > > I would just like to get clarity regarding the last recommendation. > My current deployment is failing with the following error: > > Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) > Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > > > I have previously done an RHEL OSP7 deployment, and I verified that the hosts file of that undercloud was configured with the host gateway used by the PXE network.
> > Similarly, we have set the current host gateway to 192.0.2.1 as shown below for the RDO Manager undercloud installation: > [stack at rdo-undercloud etc]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > I would like to know if it is required during an RDO deployment to have it with the address of the public interface, e.g: > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain. The reason I asked for it is because of: Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") Typically the undercloud's local_ip is used for these operations, but in your case it's added to the hosts file, and maybe this IP-to-name mapping doesn't allow the installation to proceed. Please note that the docs[1] point out that you should have an FQDN entry in the hosts file before the undercloud installation (when 192.0.2.1 isn't yet set on the system); that's why I mentioned the IP address of the public NIC. [1] http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > Thanks again for your time and help. Really appreciate it. > > Best Regards, > milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Tuesday, June 28, 2016 4:28 AM > To: Gunjan, Milind [CTO] > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment > > On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] > wrote: >> Hi Dan, >> Thanks a lot for your response. >> >> Even after properly updating the undercloud.conf file and checking the network configuration, undercloud deployment still fails. >> >> To recreate the issue, I am mentioning all the configuration steps: >> 1.
Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. >> 2. created stack user and provided required permission to stack user . >> 3. setting hostname >> sudo hostnamectl set-hostname rdo-undercloud.mydomain >> sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain >> >> [stack at rdo-undercloud etc]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > Could you try removing the 192.0.2.1 entry from /etc/hosts and replace > it with the address of the public interface, e.g: > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain > > then rerun openstack undercloud install > >> 4. enable required repositories >> sudo yum -y install epel-release >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo >> >> 5. install repos >> >> sudo yum -y install yum-plugin-priorities >> sudo yum install -y python-tripleoclient >> >> 6. update undercloud.conf >> >> [stack at rdo-undercloud ~]$ cat undercloud.conf >> [DEFAULT] >> local_ip = 192.0.2.1/24 >> undercloud_public_vip = 192.0.2.2 >> undercloud_admin_vip = 192.0.2.3 >> local_interface = enp6s0 >> masquerade_network = 192.0.2.0/24 >> dhcp_start = 192.0.2.150 >> dhcp_end = 192.0.2.199 >> network_cidr = 192.0.2.0/24 >> network_gateway = 192.0.2.1 >> discovery_iprange = 192.0.2.200,192.0.2.230 >> discovery_runbench = false >> [auth] >> >> 7. 
install undercloud >> >> openstack undercloud install >> >> install ends in error: >> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. >> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. >> Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures. >> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. >> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[nova] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures. >> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[swift] due to earlier Keystone API failures. >> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[heat] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures. >> Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures. >> Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[admin] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >> >> Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events >> Notice: Finished catalog run in 5259.44 seconds >> + rc=6 >> + set -e >> + echo 'puppet apply exited with exit code 6' >> puppet apply exited with exit code 6 >> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' >> + exit 6 >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >> >> [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
>> Traceback (most recent call last): >> File "", line 1, in >> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install >> _run_orc(instack_env) >> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc >> _run_live_command(args, instack_env, 'os-refresh-config') >> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command >> raise RuntimeError('%s failed. See log for details.' % name) >> RuntimeError: os-refresh-config failed. See log for details. >> Command 'instack-install-undercloud' returned non-zero exit status 1 >> >> >> I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you could point me in the right direction to understand the exact cause of the issue and any possible resolution. >> >> Thanks a lot. >> >> Best Regards, >> Milind >> -----Original Message----- >> From: Dan Sneddon [mailto:dsneddon at redhat.com] >> Sent: Monday, June 27, 2016 12:40 PM >> To: Gunjan, Milind [CTO] ; rdo-list at redhat.com >> Subject: Re: [rdo-list] Redeploying UnderCloud >> >> On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: >>> Hi All, >>> >>> Greetings. >>> >>> >>> >>> This is my first post and I am fairly new to RDO OpenStack. I am >>> working on an RDO Triple-O deployment on bare metal. Due to incorrect >>> values in the undercloud.conf file, my undercloud deployment failed. I >>> would like to redeploy the undercloud and I am trying to understand what >>> steps have to be taken before redeploying it. All the >>> deployment was done as the stack user, so the first step would be to delete the >>> stack user. I am not sure what has to be done regarding the networking >>> configuration autogenerated by os-net-config during the older install. >>> >>> Please advise.
>>> >>> >>> Best Regards, >>> >>> Milind >> >> No, you definitely don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc. >> that are in ~stack, and generally make your life harder than it needs to be. >> >> Anyway, it isn't necessary. You can follow a procedure very much like the one you use when upgrading the undercloud, with a couple of extra steps. >> >> As the stack user, edit your undercloud.conf and make sure there are no more typos. >> >> If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files: >> >> $ sudo ifdown br-ctlplane >> $ sudo ovs-vsctl del-br br-ctlplane >> $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane >> >> Next, run the undercloud installation again: >> >> $ sudo yum update -y # Reboot afterwards if kernel or core packages were updated >> $ openstack undercloud install >> >> Then proceed with the rest of the instructions. You may find that disk images you already uploaded or nodes you already imported are still in the database. That's OK, or you can delete and reimport them. >> >> -- >> Dan Sneddon | Principal OpenStack Engineer >> dsneddon at redhat.com | redhat.com/openstack >> 650.254.4025 | dsneddon:irc @dxs:twitter >> >> >> >> ________________________________ >> Learn more on how to switch to Sprint and save 50% on most Verizon, AT&T or T-Mobile rates. See sprint.com/50off for details. >> >> ________________________________ >> >> This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message.
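As a sketch, Dan's cleanup-and-reinstall steps can be gathered into one script. This is illustrative only, not part of any RDO tooling: it defaults to printing each command (set DO_IT=1 and run with root privileges to actually execute them); the bridge name and ifcfg path are the ones from the thread, the `run` helper is made up.

```shell
# Dry-run by default: print each command; DO_IT=1 actually executes it.
DO_IT="${DO_IT:-0}"
run() {
    if [ "$DO_IT" = 1 ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Tear down the control-plane bridge and its ifcfg files, then reinstall.
run ifdown br-ctlplane
run ovs-vsctl del-br br-ctlplane
run rm -f /etc/sysconfig/network-scripts/*br-ctlplane
run yum update -y            # reboot first if kernel/core packages update
run openstack undercloud install
```

Running it once without DO_IT set lets you eyeball the exact commands before committing to them.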
>> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com From mohammed.arafa at gmail.com Wed Jun 29 07:25:13 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 29 Jun 2016 03:25:13 -0400 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment In-Reply-To: References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> Message-ID: The hostname in the hosts file doesn't match the one set with hostnamectl ('undercloud-rdo.mydomain' vs. 'rdo-undercloud.mydomain'). On Jun 29, 2016 8:58 AM, "Marius Cornea" wrote: > On Tue, Jun 28, 2016 at 10:45 PM, Gunjan, Milind [CTO] > wrote: > > Hi All, > > > > Thanks a lot for the continued support. > > > > I would just like to get clarity regarding the last recommendation. > > My current deployment is failing with the following error: > > > > Error: Could not prefetch keystone_service provider 'openstack': > Execution of '/bin/openstack service list --quiet --format csv --long' > returned 1: Gateway Timeout (HTTP 504) > > Error: Could not prefetch keystone_role provider 'openstack': Execution > of '/bin/openstack role list --quiet --format csv' returned 1: Gateway > Timeout (HTTP 504) > > Error: Could not prefetch keystone_endpoint provider 'openstack': > Execution of '/bin/openstack endpoint list --quiet --format csv' returned > 1: Gateway Timeout (HTTP 504) > > > > > > I have previously done a RHEL OSP7 deployment, and I verified that the host file > of the RHEL OSP7 undercloud was configured with the host > gateway used by the PXE network.
> > > > Similarly, we have set the current host gateway to 192.0.2.1 as shown > below for rdo manager undercloud installation: > > [stack at rdo-undercloud etc]$ cat /etc/hosts > >> 127.0.0.1 localhost localhost.localdomain localhost4 > localhost4.localdomain4 > >> ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > > > I would like to know if it is required during rdo deployment to have it > with the address of the public interface, e.g: > > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain. > > The reason I asked for it is because of: > > Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: > (pymysql.err.OperationalError) (1045, u"Access denied for user > 'heat'@'rdo-undercloud' (using password: YES)") > > Typically the undercloud's local_ip is used for these operations but > in your case it's added to the hosts files and maybe this ip name > mapping doesn't allow the installation to proceed. Please note that > the docs[1] point that you should have an FQDN entry in the hosts file > before the undercloud installation(when 192.0.2.1 isn't yet set on the > system), that's why I mentioned the ip address of the public nic. > > [1] > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > > > Thanks again for your time and help. Really appreciate it. > > > > Best Regards, > > milind > > > > -----Original Message----- > > From: Marius Cornea [mailto:marius at remote-lab.net] > > Sent: Tuesday, June 28, 2016 4:28 AM > > To: Gunjan, Milind [CTO] > > Cc: rdo-list at redhat.com > > Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o > deployment > > > > On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] > > wrote: > >> Hi Dan, > >> Thanks a lot for your response. > >> > >> Even after properly updating the undercloud.conf file and checking the > network configuration, undercloud deployment still fails. 
> >> > >> To recreate the issue , I am mentioning all the configuration steps: > >> 1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. > >> 2. created stack user and provided required permission to stack user . > >> 3. setting hostname > >> sudo hostnamectl set-hostname rdo-undercloud.mydomain > >> sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain > >> > >> [stack at rdo-undercloud etc]$ cat /etc/hosts > >> 127.0.0.1 localhost localhost.localdomain localhost4 > localhost4.localdomain4 > >> ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > > > Could you try removing the 192.0.2.1 entry from /etc/hosts and replace > > it with the address of the public interface, e.g: > > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain > > > > then rerun openstack undercloud install > > > >> 4. enable required repositories > >> sudo yum -y install epel-release > >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > >> > >> 5. install repos > >> > >> sudo yum -y install yum-plugin-priorities > >> sudo yum install -y python-tripleoclient > >> > >> 6. update undercloud.conf > >> > >> [stack at rdo-undercloud ~]$ cat undercloud.conf > >> [DEFAULT] > >> local_ip = 192.0.2.1/24 > >> undercloud_public_vip = 192.0.2.2 > >> undercloud_admin_vip = 192.0.2.3 > >> local_interface = enp6s0 > >> masquerade_network = 192.0.2.0/24 > >> dhcp_start = 192.0.2.150 > >> dhcp_end = 192.0.2.199 > >> network_cidr = 192.0.2.0/24 > >> network_gateway = 192.0.2.1 > >> discovery_iprange = 192.0.2.200,192.0.2.230 > >> discovery_runbench = false > >> [auth] > >> > >> 7. 
install undercloud > >> > >> openstack undercloud install > >> > >> install ends in error: > >> Error: > /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: > /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Could not prefetch keystone_service provider 'openstack': > Execution of '/bin/openstack service list --quiet --format csv --long' > returned 1: Gateway Timeout (HTTP 504) > >> Error: Not managing Keystone_service[glance] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: > change from absent to present failed: Not managing Keystone_service[glance] > due to earlier Keystone API failures. > >> Error: Could not prefetch keystone_role provider 'openstack': Execution > of '/bin/openstack role list --quiet --format csv' returned 1: Gateway > Timeout (HTTP 504) > >> Error: Not managing Keystone_role[ResellerAdmin] due to earlier > Keystone API failures. > >> Error: > /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: > change from absent to present failed: Not managing > Keystone_role[ResellerAdmin] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[ironic] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: > change from absent to present failed: Not managing Keystone_service[ironic] > due to earlier Keystone API failures. 
> >> Error: > /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova > service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of > '/bin/openstack domain show --format shell Default' returned 1: Could not > find resource Default > >> Error: > /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Not managing Keystone_service[novav3] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova > v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change > from absent to present failed: Not managing Keystone_service[novav3] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_role[heat_stack_user] due to earlier > Keystone API failures. > >> Error: > /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: > change from absent to present failed: Not managing > Keystone_role[heat_stack_user] due to earlier Keystone API failures. > >> Error: > /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Not managing Keystone_service[nova] due to earlier Keystone API > failures. > >> Error: > /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova > service, user nova]/Keystone_service[nova::compute]/ensure: change from > absent to present failed: Not managing Keystone_service[nova] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_role[swiftoperator] due to earlier > Keystone API failures. 
> >> Error: > /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: > change from absent to present failed: Not managing > Keystone_role[swiftoperator] due to earlier Keystone API failures. > >> Error: > /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Not managing Keystone_service[neutron] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: > change from absent to present failed: Not managing > Keystone_service[neutron] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[ceilometer] due to earlier > Keystone API failures. > >> Error: > /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: > change from absent to present failed: Not managing > Keystone_service[ceilometer] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[swift] due to earlier Keystone API > failures. > >> Error: > /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: > change from absent to present failed: Not managing Keystone_service[swift] > due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[keystone] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: > change from absent to present failed: Not managing > Keystone_service[keystone] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[heat] due to earlier Keystone API > failures. 
> >> Error: > /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: > change from absent to present failed: Not managing Keystone_service[heat] > due to earlier Keystone API failures. > >> Error: Could not prefetch keystone_endpoint provider 'openstack': > Execution of '/bin/openstack endpoint list --quiet --format csv' returned > 1: Gateway Timeout (HTTP 504) > >> Error: > /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Could not prefetch keystone_tenant provider 'openstack': > Execution of '/bin/openstack project list --quiet --format csv --long' > returned 1: Gateway Timeout (HTTP 504) > >> Error: Not managing Keystone_tenant[service] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change > from absent to present failed: Not managing Keystone_tenant[service] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_tenant[admin] due to earlier Keystone API > failures. > >> Error: > /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change > from absent to present failed: Not managing Keystone_tenant[admin] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_role[admin] due to earlier Keystone API > failures. > >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: > change from absent to present failed: Not managing Keystone_role[admin] due > to earlier Keystone API failures. 
> >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could > not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Could not prefetch keystone_domain provider 'openstack': > Execution of '/bin/openstack domain list --quiet --format csv' returned 1: > Gateway Timeout (HTTP 504) > >> Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: > (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' > (using password: YES)") > >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call > refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 > instead of one of [0] > >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage > --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] > >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > >> > >> Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered > 'refresh' from 4 events > >> Notice: Finished catalog run in 5259.44 seconds > >> + rc=6 > >> + set -e > >> + echo 'puppet apply exited with exit code 6' > >> puppet apply exited with exit code 6 > >> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > >> + exit 6 > >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > >> > >> [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
> >> Traceback (most recent call last): > >> File "", line 1, in > >> File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 815, in install > >> _run_orc(instack_env) > >> File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 699, in _run_orc > >> _run_live_command(args, instack_env, 'os-refresh-config') > >> File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 370, in _run_live_command > >> raise RuntimeError('%s failed. See log for details.' % name) > >> RuntimeError: os-refresh-config failed. See log for details. > >> Command 'instack-install-undercloud' returned non-zero exit status 1 > >> > >> > >> I am not able to understand the exact cause of undercloud install > failure. It would be really helpful if you guys can point be in direction > to understand the exact cause of issue and any possible resolution. > >> > >> Thanks a lot. > >> > >> Best Regards, > >> Milind > >> > >> > >> Best Regards, > >> Milind > >> -----Original Message----- > >> From: Dan Sneddon [mailto:dsneddon at redhat.com] > >> Sent: Monday, June 27, 2016 12:40 PM > >> To: Gunjan, Milind [CTO] ; > rdo-list at redhat.com > >> Subject: Re: [rdo-list] Redeploying UnderCloud > >> > >> On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: > >>> Hi All, > >>> > >>> Greeting. > >>> > >>> > >>> > >>> This is my first post and I am fairly new to RDO OpenStack. I am > >>> working on RDO Triple-O deployment on baremetal. Due to incorrect > >>> values in undercloud.conf file , my undercloud deployment failed. I > >>> would like to redeploy undercloud and I am trying to understand what > >>> steps has to be taken before redeploying undercloud. All the > >>> deployment was under stack user . So first step will be to delete > >>> stack user. I am not sure what has to be done regarding the networking > >>> configuration autogenerated by os-net-config during the older install. > >>> > >>> Please advise. 
> >>> > >>> > >>> > >>> Best Regards, > >>> > >>> Milind > >> > >> No, definitely you don't want to delete the stack user, especially not > as your first step! That would get rid of the configuration files, etc. > >> that are in ~stack, and generally make your life harder than it needs > to be. > >> > >> Anyway, it isn't necessary. You can do a procedure very much like what > you do when upgrading the undercloud, with a couple of extra steps. > >> > >> As the stack user, edit your undercloud.conf, and make sure there are > no more typos. > >> > >> If the typos were in the network configuration, you should delete the > bridge and remove the ifcfg files: > >> > >> $ sudo ifdown br-ctlplane > >> $ sudo ovs-vsctl del-br br-ctlplane > >> $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane > >> > >> Next, run the underclound installation again: > >> > >> $ sudo yum update -y # Reboot after if kernel or core packages updated > $ openstack undercloud install > >> > >> Then proceed with the rest of the instructions. You may find that if > you already uploaded disk images or imported nodes that they will still be > in the database. That's OK, or you can delete and reimport. > >> > >> -- > >> Dan Sneddon | Principal OpenStack Engineer > >> dsneddon at redhat.com | redhat.com/openstack > >> 650.254.4025 | dsneddon:irc @dxs:twitter > >> > >> > >> > >> ________________________________ > >> Learn more on how to switch to Sprint and save 50% on most Verizon, > AT&T or T-Mobile rates. See sprint.com/50off for > details. > >> > >> ________________________________ > >> > >> This e-mail may contain Sprint proprietary information intended for the > sole use of the recipient(s). Any use by others is prohibited. If you are > not the intended recipient, please contact the sender and delete all copies > of the message. 
> >> > >> _______________________________________________ > >> rdo-list mailing list > >> rdo-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rdo-list > >> > >> To unsubscribe: rdo-list-unsubscribe at redhat.com > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Wed Jun 29 10:51:42 2016 From: hguemar at fedoraproject.org (=?UTF-8?Q?Ha=C3=AFkel?=) Date: Wed, 29 Jun 2016 12:51:42 +0200 Subject: [rdo-list] [centos-ci] Artifacts server does not support ranges / resume of downloads? In-Reply-To: References: Message-ID: 2016-06-28 4:25 GMT+02:00 Gerard Braad : > Hi all, > > > Usually I download the undercloud images using: > > https://gist.github.com/gbraad/45cbe30415b0dc631f5e8d20beaffebf > > which targets: 'artifacts.ci.centos.org' but now get > > "HTTP server doesn't seem to support byte ranges. Cannot resume." > > I need to resume a download about 10 times on the office connection > (China), so what happened? I have a workaround by storing the files at > another server first in Japan or the west-coast which supports > resuming. > > Although, being able to resume the download of these files is preferable. > > regards, > > > Gerard > Since this part of infrastructure is managed by CentOS Core Team, CC'ing centos-devel list. 
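Until the artifacts server honors Range requests again, `curl -C -` cannot resume there, so one blunt workaround on a flaky connection is to retry the whole transfer. A minimal retry wrapper, assuming bash and curl; the helper name and the retry count are illustrative, not part of any RDO tooling:

```shell
# retry N CMD...: run CMD up to N times, returning as soon as it succeeds.
retry() {
    local tries="$1" i
    shift
    for ((i = 1; i <= tries; i++)); do
        "$@" && return 0
        echo "attempt $i/$tries failed" >&2
        sleep 1
    done
    return 1
}

# Example (illustrative URL): -C - resumes only where the server supports
# byte ranges; otherwise each attempt restarts the download from zero.
# retry 10 curl -fL -C - -O "https://artifacts.ci.centos.org/.../image.qcow2"
```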
> > -- > > Gerard Braad | http://gbraad.nl > [ Doing Open Source Matters ] > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From Milind.Gunjan at sprint.com Wed Jun 29 11:19:37 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Wed, 29 Jun 2016 11:19:37 +0000 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment In-Reply-To: References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> Message-ID: <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> Thanks a lot for pointing out the mistakes. So, as suggested, I updated the hostname in the host file to match in a similar way as instructed in the docs: [root at undercloud ~]# cat /etc/hostname undercloud.poc [root at undercloud ~]# cat /etc/hosts 127.0.0.1 undercloud.poc Is there any stable Kilo package which can be used for installation? Best Regards, Milind Gunjan From: Mohammed Arafa [mailto:mohammed.arafa at gmail.com] Sent: Wednesday, June 29, 2016 3:25 AM To: Marius Cornea Cc: rdo-list at redhat.com; Gunjan, Milind [CTO] Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment The hostname in the hosts file doesn't match.
> My current deployment is failing with the following error: > > Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) > Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) > > > I have previously done a RHEL OSP7 deployment and I verified the host file of the RHEL OSP7 undercloud deployment, and it was configured with the host gateway used by the PXE network. > > Similarly, we have set the current host gateway to 192.0.2.1 as shown below for the RDO Manager undercloud installation: > [stack at rdo-undercloud etc]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > I would like to know if it is required during rdo deployment to have it with the address of the public interface, e.g: > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain. The reason I asked for it is because of: Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") Typically the undercloud's local_ip is used for these operations but in your case it's added to the hosts file and maybe this IP-to-name mapping doesn't allow the installation to proceed. Please note that the docs[1] point out that you should have an FQDN entry in the hosts file before the undercloud installation (when 192.0.2.1 isn't yet set on the system), that's why I mentioned the IP address of the public NIC.
[1] http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > Thanks again for your time and help. Really appreciate it. > > Best Regards, > milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Tuesday, June 28, 2016 4:28 AM > To: Gunjan, Milind [CTO] > > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment > > On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] > > wrote: >> Hi Dan, >> Thanks a lot for your response. >> >> Even after properly updating the undercloud.conf file and checking the network configuration, undercloud deployment still fails. >> >> To recreate the issue , I am mentioning all the configuration steps: >> 1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. >> 2. created stack user and provided required permission to stack user . >> 3. setting hostname >> sudo hostnamectl set-hostname rdo-undercloud.mydomain >> sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain >> >> [stack at rdo-undercloud etc]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > Could you try removing the 192.0.2.1 entry from /etc/hosts and replace > it with the address of the public interface, e.g: > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain > > then rerun openstack undercloud install > >> 4. enable required repositories >> sudo yum -y install epel-release >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo >> >> 5. install repos >> >> sudo yum -y install yum-plugin-priorities >> sudo yum install -y python-tripleoclient >> >> 6. 
update undercloud.conf >> >> [stack at rdo-undercloud ~]$ cat undercloud.conf >> [DEFAULT] >> local_ip = 192.0.2.1/24 >> undercloud_public_vip = 192.0.2.2 >> undercloud_admin_vip = 192.0.2.3 >> local_interface = enp6s0 >> masquerade_network = 192.0.2.0/24 >> dhcp_start = 192.0.2.150 >> dhcp_end = 192.0.2.199 >> network_cidr = 192.0.2.0/24 >> network_gateway = 192.0.2.1 >> discovery_iprange = 192.0.2.200,192.0.2.230 >> discovery_runbench = false >> [auth] >> >> 7. install undercloud >> >> openstack undercloud install >> >> install ends in error: >> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. >> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. >> Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures. >> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[nova] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. >> Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures. >> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[swift] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[heat] due to earlier Keystone API failures. >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures. >> Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures. >> Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[admin] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >> >> Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events >> Notice: Finished catalog run in 5259.44 seconds >> + rc=6 >> + set -e >> + echo 'puppet apply exited with exit code 6' >> puppet apply exited with exit code 6 >> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' >> + exit 6 >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. 
[Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >> >> [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... >> Traceback (most recent call last): >> File "<string>", line 1, in <module> >> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install >> _run_orc(instack_env) >> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc >> _run_live_command(args, instack_env, 'os-refresh-config') >> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command >> raise RuntimeError('%s failed. See log for details.' % name) >> RuntimeError: os-refresh-config failed. See log for details. >> Command 'instack-install-undercloud' returned non-zero exit status 1 >> >> >> I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you guys can point me in the right direction to understand the exact cause of the issue and any possible resolution. >> >> Thanks a lot. >> >> Best Regards, >> Milind >> >> >> Best Regards, >> Milind >> -----Original Message----- >> From: Dan Sneddon [mailto:dsneddon at redhat.com] >> Sent: Monday, June 27, 2016 12:40 PM >> To: Gunjan, Milind [CTO] >; rdo-list at redhat.com >> Subject: Re: [rdo-list] Redeploying UnderCloud >> >> On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: >>> Hi All, >>> >>> Greetings. >>> >>> >>> >>> This is my first post and I am fairly new to RDO OpenStack. I am >>> working on RDO Triple-O deployment on baremetal. Due to incorrect >>> values in the undercloud.conf file, my undercloud deployment failed. I >>> would like to redeploy the undercloud and I am trying to understand what >>> steps have to be taken before redeploying the undercloud. All the >>> deployment was under the stack user. So the first step will be to delete the >>> stack user.
I am not sure what has to be done regarding the networking >>> configuration autogenerated by os-net-config during the older install. >>> >>> Please advise. >>> >>> >>> >>> Best Regards, >>> >>> Milind >> >> No, definitely you don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc. >> that are in ~stack, and generally make your life harder than it needs to be. >> >> Anyway, it isn't necessary. You can do a procedure very much like what you do when upgrading the undercloud, with a couple of extra steps. >> >> As the stack user, edit your undercloud.conf, and make sure there are no more typos. >> >> If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files: >> >> $ sudo ifdown br-ctlplane >> $ sudo ovs-vsctl del-br br-ctlplane >> $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane >> >> Next, run the undercloud installation again: >> >> $ sudo yum update -y # Reboot after if kernel or core packages updated $ openstack undercloud install >> >> Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes that they will still be in the database. That's OK, or you can delete and reimport. >> >> -- >> Dan Sneddon | Principal OpenStack Engineer >> dsneddon at redhat.com | redhat.com/openstack >> 650.254.4025 | dsneddon:irc @dxs:twitter >> >> >> >> ________________________________ >> Learn more on how to switch to Sprint and save 50% on most Verizon, AT&T or T-Mobile rates. See sprint.com/50off for details. >> >> ________________________________ >> >> This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message.
>> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mohammed.arafa at gmail.com Wed Jun 29 11:34:10 2016 From: mohammed.arafa at gmail.com (Mohammed Arafa) Date: Wed, 29 Jun 2016 07:34:10 -0400 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment In-Reply-To: <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> Message-ID: You need both the short name and the FQDN in your hosts file. (RabbitMQ issue. Don't know if that's been fixed) On Jun 29, 2016 1:19 PM, "Gunjan, Milind [CTO]" wrote: > Thanks a lot for pointing out the mistakes. > > > > So, as suggested, I updated the hostname in the host file to match in a > similar way as instructed in the docs: > > > > [root at undercloud ~]# cat /etc/hostname > > undercloud.poc > > > > [root at undercloud ~]# cat /etc/hosts > > 127.0.0.1 undercloud.poc > > > > Is there any stable Kilo package which can be used for installation? > > > > Best Regards, > > Milind Gunjan > > > > > > *From:* Mohammed Arafa [mailto:mohammed.arafa at gmail.com] > *Sent:* Wednesday, June 29, 2016 3:25 AM > *To:* Marius Cornea > *Cc:* rdo-list at redhat.com; Gunjan, Milind [CTO] > *Subject:* Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o > deployment > > > > The hostname in the hosts file doesn't match.
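[Editor's note: Mohammed's point — some hosts entry must carry both the FQDN and the short hostname — can be checked mechanically before rerunning the install. A sketch, assuming a conventional /etc/hosts layout (whitespace-separated names, `#` comments); the function name is ours, not part of any TripleO tooling:]

```shell
# Sketch: check that a hosts file maps some entry to both the FQDN and
# the short hostname, as suggested in this thread.
hosts_has_both_names() {
    file="$1"; fqdn="$2"
    short="${fqdn%%.*}"                  # short name = FQDN minus domain
    awk -v f="$fqdn" -v s="$short" '
        /^[[:space:]]*#/ { next }        # skip comment lines
        { for (i = 2; i <= NF; i++) {    # field 1 is the IP address
              if ($i == f) seen_f = 1
              if ($i == s) seen_s = 1 } }
        END { exit !(seen_f && seen_s) }' "$file"
}

# A file containing only "127.0.0.1 undercloud.poc" fails the check;
# "127.0.0.1 undercloud.poc undercloud" passes.
```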
> > On Jun 29, 2016 8:58 AM, "Marius Cornea" wrote: > > On Tue, Jun 28, 2016 at 10:45 PM, Gunjan, Milind [CTO] > wrote: > > Hi All, > > > > Thanks a lot for continued support. > > > > I would just like to get clarity regarding the last recommendation. > > My current deployment is failing with following error : > > > > Error: Could not prefetch keystone_service provider 'openstack': > Execution of '/bin/openstack service list --quiet --format csv --long' > returned 1: Gateway Timeout (HTTP 504) > > Error: Could not prefetch keystone_role provider 'openstack': Execution > of '/bin/openstack role list --quiet --format csv' returned 1: Gateway > Timeout (HTTP 504) > > Error: Could not prefetch keystone_endpoint provider 'openstack': > Execution of '/bin/openstack endpoint list --quiet --format csv' returned > 1: Gateway Timeout (HTTP 504) > > > > > > I have previously done RHEL OSP7 deployment and I verified the host file > of RHEL OSP7 undercloud deployment and it was configured with host > gateway used by pxe network. > > > > Similarly, we have set the current host gateway to 192.0.2.1 as shown > below for rdo manager undercloud installation: > > [stack at rdo-undercloud etc]$ cat /etc/hosts > >> 127.0.0.1 localhost localhost.localdomain localhost4 > localhost4.localdomain4 > >> ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > > > I would like to know if it is required during rdo deployment to have it > with the address of the public interface, e.g: > > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain. 
> > The reason I asked for it is because of: > > Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: > (pymysql.err.OperationalError) (1045, u"Access denied for user > 'heat'@'rdo-undercloud' (using password: YES)") > > Typically the undercloud's local_ip is used for these operations but > in your case it's added to the hosts files and maybe this ip name > mapping doesn't allow the installation to proceed. Please note that > the docs[1] point that you should have an FQDN entry in the hosts file > before the undercloud installation(when 192.0.2.1 isn't yet set on the > system), that's why I mentioned the ip address of the public nic. > > [1] > http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > > > > Thanks again for your time and help. Really appreciate it. > > > > Best Regards, > > milind > > > > -----Original Message----- > > From: Marius Cornea [mailto:marius at remote-lab.net] > > Sent: Tuesday, June 28, 2016 4:28 AM > > To: Gunjan, Milind [CTO] > > Cc: rdo-list at redhat.com > > Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o > deployment > > > > On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] > > wrote: > >> Hi Dan, > >> Thanks a lot for your response. > >> > >> Even after properly updating the undercloud.conf file and checking the > network configuration, undercloud deployment still fails. > >> > >> To recreate the issue , I am mentioning all the configuration steps: > >> 1. Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. > >> 2. created stack user and provided required permission to stack user . > >> 3. 
setting hostname > >> sudo hostnamectl set-hostname rdo-undercloud.mydomain > >> sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain > >> > >> [stack at rdo-undercloud etc]$ cat /etc/hosts > >> 127.0.0.1 localhost localhost.localdomain localhost4 > localhost4.localdomain4 > >> ::1 localhost localhost.localdomain localhost6 > localhost6.localdomain6 > >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > > > Could you try removing the 192.0.2.1 entry from /etc/hosts and replace > > it with the address of the public interface, e.g: > > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain > > > > then rerun openstack undercloud install > > > >> 4. enable required repositories > >> sudo yum -y install epel-release > >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo > https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo > >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo > http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo > >> > >> 5. install repos > >> > >> sudo yum -y install yum-plugin-priorities > >> sudo yum install -y python-tripleoclient > >> > >> 6. update undercloud.conf > >> > >> [stack at rdo-undercloud ~]$ cat undercloud.conf > >> [DEFAULT] > >> local_ip = 192.0.2.1/24 > >> undercloud_public_vip = 192.0.2.2 > >> undercloud_admin_vip = 192.0.2.3 > >> local_interface = enp6s0 > >> masquerade_network = 192.0.2.0/24 > >> dhcp_start = 192.0.2.150 > >> dhcp_end = 192.0.2.199 > >> network_cidr = 192.0.2.0/24 > >> network_gateway = 192.0.2.1 > >> discovery_iprange = 192.0.2.200,192.0.2.230 > >> discovery_runbench = false > >> [auth] > >> > >> 7. 
install undercloud > >> > >> openstack undercloud install > >> > >> install ends in error: > >> Error: > /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: > /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Could not prefetch keystone_service provider 'openstack': > Execution of '/bin/openstack service list --quiet --format csv --long' > returned 1: Gateway Timeout (HTTP 504) > >> Error: Not managing Keystone_service[glance] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: > change from absent to present failed: Not managing Keystone_service[glance] > due to earlier Keystone API failures. > >> Error: Could not prefetch keystone_role provider 'openstack': Execution > of '/bin/openstack role list --quiet --format csv' returned 1: Gateway > Timeout (HTTP 504) > >> Error: Not managing Keystone_role[ResellerAdmin] due to earlier > Keystone API failures. > >> Error: > /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: > change from absent to present failed: Not managing > Keystone_role[ResellerAdmin] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[ironic] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: > change from absent to present failed: Not managing Keystone_service[ironic] > due to earlier Keystone API failures. 
> >> Error: > /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova > service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of > '/bin/openstack domain show --format shell Default' returned 1: Could not > find resource Default > >> Error: > /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Not managing Keystone_service[novav3] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova > v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change > from absent to present failed: Not managing Keystone_service[novav3] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_role[heat_stack_user] due to earlier > Keystone API failures. > >> Error: > /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: > change from absent to present failed: Not managing > Keystone_role[heat_stack_user] due to earlier Keystone API failures. > >> Error: > /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Not managing Keystone_service[nova] due to earlier Keystone API > failures. > >> Error: > /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova > service, user nova]/Keystone_service[nova::compute]/ensure: change from > absent to present failed: Not managing Keystone_service[nova] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_role[swiftoperator] due to earlier > Keystone API failures. 
> >> Error: > /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: > change from absent to present failed: Not managing > Keystone_role[swiftoperator] due to earlier Keystone API failures. > >> Error: > /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Not managing Keystone_service[neutron] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: > change from absent to present failed: Not managing > Keystone_service[neutron] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[ceilometer] due to earlier > Keystone API failures. > >> Error: > /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: > change from absent to present failed: Not managing > Keystone_service[ceilometer] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[swift] due to earlier Keystone API > failures. > >> Error: > /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: > change from absent to present failed: Not managing Keystone_service[swift] > due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[keystone] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: > change from absent to present failed: Not managing > Keystone_service[keystone] due to earlier Keystone API failures. > >> Error: Not managing Keystone_service[heat] due to earlier Keystone API > failures. 
> >> Error: > /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: > change from absent to present failed: Not managing Keystone_service[heat] > due to earlier Keystone API failures. > >> Error: Could not prefetch keystone_endpoint provider 'openstack': > Execution of '/bin/openstack endpoint list --quiet --format csv' returned > 1: Gateway Timeout (HTTP 504) > >> Error: > /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: > Could not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Could not prefetch keystone_tenant provider 'openstack': > Execution of '/bin/openstack project list --quiet --format csv --long' > returned 1: Gateway Timeout (HTTP 504) > >> Error: Not managing Keystone_tenant[service] due to earlier Keystone > API failures. > >> Error: > /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change > from absent to present failed: Not managing Keystone_tenant[service] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_tenant[admin] due to earlier Keystone API > failures. > >> Error: > /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change > from absent to present failed: Not managing Keystone_tenant[admin] due to > earlier Keystone API failures. > >> Error: Not managing Keystone_role[admin] due to earlier Keystone API > failures. > >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: > change from absent to present failed: Not managing Keystone_role[admin] due > to earlier Keystone API failures. 
> >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could > not evaluate: Execution of '/bin/openstack domain show --format shell > Default' returned 1: Could not find resource Default > >> Error: Could not prefetch keystone_domain provider 'openstack': > Execution of '/bin/openstack domain list --quiet --format csv' returned 1: > Gateway Timeout (HTTP 504) > >> Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: > (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' > (using password: YES)") > >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call > refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 > instead of one of [0] > >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage > --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] > >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > >> > >> Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered > 'refresh' from 4 events > >> Notice: Finished catalog run in 5259.44 seconds > >> + rc=6 > >> + set -e > >> + echo 'puppet apply exited with exit code 6' > >> puppet apply exited with exit code 6 > >> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' > >> + exit 6 > >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure > phase. [Command '['dib-run-parts', > '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit > status 1] > >> > >> [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
> >> Traceback (most recent call last): > >> File "", line 1, in > >> File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 815, in install > >> _run_orc(instack_env) > >> File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 699, in _run_orc > >> _run_live_command(args, instack_env, 'os-refresh-config') > >> File > "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line > 370, in _run_live_command > >> raise RuntimeError('%s failed. See log for details.' % name) > >> RuntimeError: os-refresh-config failed. See log for details. > >> Command 'instack-install-undercloud' returned non-zero exit status 1 > >> > >> > >> I am not able to understand the exact cause of undercloud install > failure. It would be really helpful if you guys can point be in direction > to understand the exact cause of issue and any possible resolution. > >> > >> Thanks a lot. > >> > >> Best Regards, > >> Milind > >> > >> > >> Best Regards, > >> Milind > >> -----Original Message----- > >> From: Dan Sneddon [mailto:dsneddon at redhat.com] > >> Sent: Monday, June 27, 2016 12:40 PM > >> To: Gunjan, Milind [CTO] ; > rdo-list at redhat.com > >> Subject: Re: [rdo-list] Redeploying UnderCloud > >> > >> On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote: > >>> Hi All, > >>> > >>> Greeting. > >>> > >>> > >>> > >>> This is my first post and I am fairly new to RDO OpenStack. I am > >>> working on RDO Triple-O deployment on baremetal. Due to incorrect > >>> values in undercloud.conf file , my undercloud deployment failed. I > >>> would like to redeploy undercloud and I am trying to understand what > >>> steps has to be taken before redeploying undercloud. All the > >>> deployment was under stack user . So first step will be to delete > >>> stack user. I am not sure what has to be done regarding the networking > >>> configuration autogenerated by os-net-config during the older install. > >>> > >>> Please advise. 
> >>> > >>> > >>> > >>> Best Regards, > >>> > >>> Milind > >> > >> No, definitely you don't want to delete the stack user, especially not > as your first step! That would get rid of the configuration files, etc. > >> that are in ~stack, and generally make your life harder than it needs > to be. > >> > >> Anyway, it isn't necessary. You can do a procedure very much like what > you do when upgrading the undercloud, with a couple of extra steps. > >> > >> As the stack user, edit your undercloud.conf, and make sure there are > no more typos. > >> > >> If the typos were in the network configuration, you should delete the > bridge and remove the ifcfg files: > >> > >> $ sudo ifdown br-ctlplane > >> $ sudo ovs-vsctl del-br br-ctlplane > >> $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane > >> > >> Next, run the underclound installation again: > >> > >> $ sudo yum update -y # Reboot after if kernel or core packages updated > $ openstack undercloud install > >> > >> Then proceed with the rest of the instructions. You may find that if > you already uploaded disk images or imported nodes that they will still be > in the database. That's OK, or you can delete and reimport. > >> > >> -- > >> Dan Sneddon | Principal OpenStack Engineer > >> dsneddon at redhat.com | redhat.com/openstack > >> 650.254.4025 | dsneddon:irc @dxs:twitter > >> > >> > >> > >> ________________________________ > >> Learn more on how to switch to Sprint and save 50% on most Verizon, > AT&T or T-Mobile rates. See sprint.com/50off for > details. > >> > >> ________________________________ > >> > >> This e-mail may contain Sprint proprietary information intended for the > sole use of the recipient(s). Any use by others is prohibited. If you are > not the intended recipient, please contact the sender and delete all copies > of the message. 
> >> _______________________________________________
> >> rdo-list mailing list
> >> rdo-list at redhat.com
> >> https://www.redhat.com/mailman/listinfo/rdo-list
> >>
> >> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Milind.Gunjan at sprint.com  Wed Jun 29 11:39:06 2016
From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO])
Date: Wed, 29 Jun 2016 11:39:06 +0000
Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment
In-Reply-To: 
References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com>
 <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com>
 <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com>
Message-ID: <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com>

Okay. Updated the hosts file with the short name and FQDN:

[root at undercloud ~]# cat /etc/hosts
127.0.0.1 undercloud undercloud.poc

I am looking to install a stable release of OpenStack Kilo/Liberty. Can you please point me to the stable repo which I can pull for the install?

Best Regards,
Milind

From: Mohammed Arafa [mailto:mohammed.arafa at gmail.com]
Sent: Wednesday, June 29, 2016 7:34 AM
To: Gunjan, Milind [CTO]
Cc: rdo-list at redhat.com; Marius Cornea
Subject: RE: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment

You need both the short name and the FQDN in your hosts file. (RabbitMQ issue. Don't know if that's been fixed)

On Jun 29, 2016 1:19 PM, "Gunjan, Milind [CTO]" > wrote:
Thanks a lot for pointing out the mistakes.
So, as suggested, I updated the hostname in the hosts file to match, in the same way as instructed in the docs:

[root at undercloud ~]# cat /etc/hostname
undercloud.poc

[root at undercloud ~]# cat /etc/hosts
127.0.0.1 undercloud.poc

Is there any stable Kilo package which can be used for installation?

Best Regards,
Milind Gunjan

From: Mohammed Arafa [mailto:mohammed.arafa at gmail.com]
Sent: Wednesday, June 29, 2016 3:25 AM
To: Marius Cornea >
Cc: rdo-list at redhat.com; Gunjan, Milind [CTO] >
Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment

The hostnames in the hosts file don't match.

On Jun 29, 2016 8:58 AM, "Marius Cornea" > wrote:
On Tue, Jun 28, 2016 at 10:45 PM, Gunjan, Milind [CTO] > wrote:
> Hi All,
>
> Thanks a lot for the continued support.
>
> I would just like to get clarity regarding the last recommendation.
> My current deployment is failing with the following error:
>
> Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504)
> Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
> Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
>
> I have previously done an RHEL OSP7 deployment, and I verified that the hosts file of that undercloud deployment was configured with the host gateway used by the PXE network.
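[Editor's note] The point raised above — the hosts entry should carry both the short name and the FQDN — can be sanity-checked before rerunning the install. A minimal sketch (the `undercloud`/`undercloud.poc` names are the ones used in this thread; substitute your own):

```shell
# Check that one /etc/hosts-style line carries both the short name and
# the FQDN of the undercloud. The sample entry mirrors this thread.
hosts_line="127.0.0.1 undercloud undercloud.poc"

has_name() {
    # $1 = hosts line, $2 = name to look for as an exact field
    echo "$1" | tr ' ' '\n' | grep -qx "$2"
}

if has_name "$hosts_line" "undercloud" && has_name "$hosts_line" "undercloud.poc"; then
    echo "hosts entry OK: short name and FQDN both present"
else
    echo "hosts entry incomplete: need both short name and FQDN" >&2
fi
```

On a live system the same idea can be applied to the output of `getent hosts $(hostname -s)` and `getent hosts $(hostname -f)` instead of a literal string.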
> > Similarly, we have set the current host gateway to 192.0.2.1 as shown below for rdo manager undercloud installation: > [stack at rdo-undercloud etc]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > I would like to know if it is required during rdo deployment to have it with the address of the public interface, e.g: > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain. The reason I asked for it is because of: Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") Typically the undercloud's local_ip is used for these operations but in your case it's added to the hosts files and maybe this ip name mapping doesn't allow the installation to proceed. Please note that the docs[1] point that you should have an FQDN entry in the hosts file before the undercloud installation(when 192.0.2.1 isn't yet set on the system), that's why I mentioned the ip address of the public nic. [1] http://docs.openstack.org/developer/tripleo-docs/installation/installation.html > > Thanks again for your time and help. Really appreciate it. > > Best Regards, > milind > > -----Original Message----- > From: Marius Cornea [mailto:marius at remote-lab.net] > Sent: Tuesday, June 28, 2016 4:28 AM > To: Gunjan, Milind [CTO] > > Cc: rdo-list at redhat.com > Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment > > On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] > > wrote: >> Hi Dan, >> Thanks a lot for your response. >> >> Even after properly updating the undercloud.conf file and checking the network configuration, undercloud deployment still fails. >> >> To recreate the issue , I am mentioning all the configuration steps: >> 1. 
Installed CentOS Linux release 7.2.1511 (Core) image on baremetal. >> 2. created stack user and provided required permission to stack user . >> 3. setting hostname >> sudo hostnamectl set-hostname rdo-undercloud.mydomain >> sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain >> >> [stack at rdo-undercloud etc]$ cat /etc/hosts >> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 >> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 >> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain > > Could you try removing the 192.0.2.1 entry from /etc/hosts and replace > it with the address of the public interface, e.g: > $ip_public_nic rdo-undercloud rdo-undercloud.mydomain > > then rerun openstack undercloud install > >> 4. enable required repositories >> sudo yum -y install epel-release >> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo >> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo >> >> 5. install repos >> >> sudo yum -y install yum-plugin-priorities >> sudo yum install -y python-tripleoclient >> >> 6. update undercloud.conf >> >> [stack at rdo-undercloud ~]$ cat undercloud.conf >> [DEFAULT] >> local_ip = 192.0.2.1/24 >> undercloud_public_vip = 192.0.2.2 >> undercloud_admin_vip = 192.0.2.3 >> local_interface = enp6s0 >> masquerade_network = 192.0.2.0/24 >> dhcp_start = 192.0.2.150 >> dhcp_end = 192.0.2.199 >> network_cidr = 192.0.2.0/24 >> network_gateway = 192.0.2.1 >> discovery_iprange = 192.0.2.200,192.0.2.230 >> discovery_runbench = false >> [auth] >> >> 7. 
install undercloud >> >> openstack undercloud install >> >> install ends in error: >> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. >> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. >> Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures. >> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. >> Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures. >> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[nova] due to earlier Keystone API failures. >> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures. >> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. >> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[swift] due to earlier Keystone API failures. >> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures. >> Error: Not managing Keystone_service[heat] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures. >> Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) >> Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures. >> Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures. >> Error: Not managing Keystone_role[admin] due to earlier Keystone API failures. >> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures. 
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default >> Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) >> Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)") >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] >> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0] >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >> >> Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events >> Notice: Finished catalog run in 5259.44 seconds >> + rc=6 >> + set -e >> + echo 'puppet apply exited with exit code 6' >> puppet apply exited with exit code 6 >> + '[' 6 '!=' 2 -a 6 '!=' 0 ']' >> + exit 6 >> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1] >> >> [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting... 
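[Editor's note] The tail of the log above (`+ '[' 6 '!=' 2 -a 6 '!=' 0 ']'` followed by `+ exit 6`) is the install wrapper applying puppet's `--detailed-exitcodes` convention: 0 means success with no changes, 2 means success with changes applied, and anything else (here 6, i.e. changes plus failures) is treated as an error. A minimal sketch of that check:

```shell
# Puppet --detailed-exitcodes convention, as used by the
# os-refresh-config wrapper in the log above:
#   0 = success, no changes; 2 = success, changes applied;
#   4 = failures; 6 = changes applied and failures occurred.
puppet_ok() {
    [ "$1" = "0" ] || [ "$1" = "2" ]
}

for rc in 0 2 4 6; do
    if puppet_ok "$rc"; then
        echo "puppet exit code $rc: success"
    else
        echo "puppet exit code $rc: failure"
    fi
done
```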
>> Traceback (most recent call last):
>> File "", line 1, in
>> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install
>> _run_orc(instack_env)
>> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc
>> _run_live_command(args, instack_env, 'os-refresh-config')
>> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command
>> raise RuntimeError('%s failed. See log for details.' % name)
>> RuntimeError: os-refresh-config failed. See log for details.
>> Command 'instack-install-undercloud' returned non-zero exit status 1
>>
>>
>> I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you could point me in the direction of the exact cause of the issue and any possible resolution.
>>
>> Thanks a lot.
>>
>> Best Regards,
>> Milind
>>
>> -----Original Message-----
>> From: Dan Sneddon [mailto:dsneddon at redhat.com]
>> Sent: Monday, June 27, 2016 12:40 PM
>> To: Gunjan, Milind [CTO] >; rdo-list at redhat.com
>> Subject: Re: [rdo-list] Redeploying UnderCloud
>>
>> On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote:
>>> Hi All,
>>>
>>> Greetings.
>>>
>>> This is my first post and I am fairly new to RDO OpenStack. I am
>>> working on an RDO TripleO deployment on baremetal. Due to incorrect
>>> values in the undercloud.conf file, my undercloud deployment failed. I
>>> would like to redeploy the undercloud and I am trying to understand what
>>> steps have to be taken before redeploying it. All the
>>> deployment was done under the stack user, so the first step would be to
>>> delete the stack user. I am not sure what has to be done regarding the
>>> networking configuration autogenerated by os-net-config during the older install.
>>>
>>> Please advise.
>>>
>>> Best Regards,
>>>
>>> Milind
>>
>> No, you definitely don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc.,
>> that are in ~stack, and generally make your life harder than it needs to be.
>>
>> Anyway, it isn't necessary. You can follow a procedure very much like the one for upgrading the undercloud, with a couple of extra steps.
>>
>> As the stack user, edit your undercloud.conf and make sure there are no more typos.
>>
>> If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files:
>>
>> $ sudo ifdown br-ctlplane
>> $ sudo ovs-vsctl del-br br-ctlplane
>> $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane
>>
>> Next, run the undercloud installation again:
>>
>> $ sudo yum update -y   # Reboot afterwards if kernel or core packages were updated
>> $ openstack undercloud install
>>
>> Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes, they will still be in the database. That's OK, or you can delete and reimport them.
>>
>> --
>> Dan Sneddon | Principal OpenStack Engineer
>> dsneddon at redhat.com | redhat.com/openstack
>> 650.254.4025 | dsneddon:irc @dxs:twitter
>>
>> ________________________________
>> Learn more on how to switch to Sprint and save 50% on most Verizon, AT&T or T-Mobile rates. See sprint.com/50off for details.
>>
>> ________________________________
>>
>> This e-mail may contain Sprint proprietary information intended for the sole use of the recipient(s). Any use by others is prohibited. If you are not the intended recipient, please contact the sender and delete all copies of the message.
>>
>> _______________________________________________
>> rdo-list mailing list
>> rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com

_______________________________________________
rdo-list mailing list
rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chkumar246 at gmail.com  Wed Jun 29 12:26:55 2016
From: chkumar246 at gmail.com (Chandan kumar)
Date: Wed, 29 Jun 2016 17:56:55 +0530
Subject: [rdo-list] RDO Bugs statistics on 2016-06-29
Message-ID: 

# RDO Bugs on 2016-06-29

This email summarizes the active RDO bugs listed in the Red Hat Bugzilla database at .

To report a new bug against RDO, go to: 

## Summary

- Open (NEW, ASSIGNED, ON_DEV): 150
- Fixed (MODIFIED, POST, ON_QA): 43

## Number of open bugs by component

dib-utils                 [  1] +
distribution              [  9] ++++++++++
Documentation             [  1] +
instack                   [  1] +
instack-undercloud        [ 10] +++++++++++
openstack-ceilometer      [  3] +++
openstack-cinder          [  2] ++
openstack-designate       [  1] +
openstack-glance          [  1] +
openstack-horizon         [  1] +
openstack-ironic-disco... [  1] +
openstack-keystone        [  1] +
openstack-neutron         [  4] ++++
openstack-nova            [  5] +++++
openstack-packstack       [ 34] ++++++++++++++++++++++++++++++++++++++++
openstack-puppet-modules  [  7] ++++++++
openstack-sahara          [  2] ++
openstack-selinux         [  3] +++
openstack-tripleo         [ 23] +++++++++++++++++++++++++++
openstack-tripleo-heat... [  2] ++
openstack-tripleo-imag... [  2] ++
openstack-trove           [  1] +
Package Review            [ 14] ++++++++++++++++
python-novaclient         [  1] +
rdo-manager               [ 14] ++++++++++++++++
rdopkg                    [  1] +
RFEs                      [  2] ++
tempest                   [  3] +++

## Open bugs

This is a list of "open" bugs by component. An "open" bug is in state NEW, ASSIGNED, ON_DEV and has not yet been fixed.
(150 bugs) ### dib-utils (1 bug) [1263779 ] http://bugzilla.redhat.com/1263779 (NEW) Component: dib-utils Last change: 2016-04-18 Summary: Packstack Ironic admin_url misconfigured in nova.conf ### distribution (9 bugs) [1349665 ] http://bugzilla.redhat.com/1349665 (NEW) Component: distribution Last change: 2016-06-23 Summary: CVE-2016-4972 openstack-murano: RCE via usage of insecure YAML tags [openstack-rdo] [1243533 ] http://bugzilla.redhat.com/1243533 (NEW) Component: distribution Last change: 2016-06-01 Summary: (RDO) Tracker: Review requests for new RDO Liberty packages [1316169 ] http://bugzilla.redhat.com/1316169 (ASSIGNED) Component: distribution Last change: 2016-05-18 Summary: openstack-barbican-api missing pid dir or wrong pid file specified [1329341 ] http://bugzilla.redhat.com/1329341 (NEW) Component: distribution Last change: 2016-06-17 Summary: Tracker: Blockers and Review requests for new RDO Newton packages [1349666 ] http://bugzilla.redhat.com/1349666 (NEW) Component: distribution Last change: 2016-06-23 Summary: CVE-2016-4972 python-muranoclient: openstack-murano: RCE via usage of insecure YAML tags [openstack-rdo] [1301751 ] http://bugzilla.redhat.com/1301751 (NEW) Component: distribution Last change: 2016-04-18 Summary: Move all logging to stdout/err to allow systemd throttling logging of errors [1290163 ] http://bugzilla.redhat.com/1290163 (NEW) Component: distribution Last change: 2016-05-17 Summary: Tracker: Blockers and Review requests for new RDO Mitaka packages [1337335 ] http://bugzilla.redhat.com/1337335 (NEW) Component: distribution Last change: 2016-05-25 Summary: Hiera >= 2.x packaging [1346240 ] http://bugzilla.redhat.com/1346240 (ASSIGNED) Component: distribution Last change: 2016-06-21 Summary: Erlang 18.3.3 update fails ### Documentation (1 bug) [1272108 ] http://bugzilla.redhat.com/1272108 (NEW) Component: Documentation Last change: 2016-04-18 Summary: [DOC] External network should be documents in RDO manager installation ### 
instack (1 bug) [1315827 ] http://bugzilla.redhat.com/1315827 (NEW) Component: instack Last change: 2016-05-09 Summary: openstack undercloud install fails with "Element pip- and-virtualenv already loaded." ### instack-undercloud (10 bugs) [1347736 ] http://bugzilla.redhat.com/1347736 (NEW) Component: instack-undercloud Last change: 2016-06-17 Summary: Unable to install undercloud because tripleo-common is missing [1175687 ] http://bugzilla.redhat.com/1175687 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: instack is not configued properly to log all Horizon/Tuskar messages in the undercloud deployment [1220509 ] http://bugzilla.redhat.com/1220509 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: wget is missing from qcow2 image fails instack-build- images script [1199637 ] http://bugzilla.redhat.com/1199637 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: [RDO][Instack-undercloud]: harmless ERROR: installing 'template' displays when building the images . 
[1271200 ] http://bugzilla.redhat.com/1271200 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: Overcloud images contain Kilo repos [1216982 ] http://bugzilla.redhat.com/1216982 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: instack-build-images does not stop on certain errors [1134073 ] http://bugzilla.redhat.com/1134073 (NEW) Component: instack-undercloud Last change: 2016-04-22 Summary: Nova default quotas insufficient to deploy baremetal overcloud [1265334 ] http://bugzilla.redhat.com/1265334 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: rdo-manager liberty instack undercloud puppet apply fails w/ missing package dep pyinotify [1211800 ] http://bugzilla.redhat.com/1211800 (ASSIGNED) Component: instack-undercloud Last change: 2016-04-18 Summary: Sphinx docs for instack-undercloud have an incorrect network topology [1200081 ] http://bugzilla.redhat.com/1200081 (NEW) Component: instack-undercloud Last change: 2016-04-18 Summary: Installing instack undercloud on Fedora20 VM fails ### openstack-ceilometer (3 bugs) [1331510 ] http://bugzilla.redhat.com/1331510 (ASSIGNED) Component: openstack-ceilometer Last change: 2016-06-01 Summary: Gnocchi 2.0.2-1 release does not have Mitaka default configuration file [1348222 ] http://bugzilla.redhat.com/1348222 (NEW) Component: openstack-ceilometer Last change: 2016-06-20 Summary: Unable to start ceilometer-central and gnocchi since redis module is missing on controller [1265741 ] http://bugzilla.redhat.com/1265741 (NEW) Component: openstack-ceilometer Last change: 2016-04-27 Summary: python-redis is not installed with packstack allinone ### openstack-cinder (2 bugs) [1121256 ] http://bugzilla.redhat.com/1121256 (NEW) Component: openstack-cinder Last change: 2016-04-19 Summary: Configuration file in share forces ignore of auth_uri [1049535 ] http://bugzilla.redhat.com/1049535 (NEW) Component: openstack-cinder Last change: 2016-04-19 Summary: [RFE] permit 
cinder to create a volume when root_squash is set to on for gluster storage ### openstack-designate (1 bug) [1343663 ] http://bugzilla.redhat.com/1343663 (NEW) Component: openstack-designate Last change: 2016-06-07 Summary: openstack-designate are missing dependancies ### openstack-glance (1 bug) [1312466 ] http://bugzilla.redhat.com/1312466 (NEW) Component: openstack-glance Last change: 2016-04-19 Summary: Support for blueprint cinder-store-upload-download in glance_store ### openstack-horizon (1 bug) [1333508 ] http://bugzilla.redhat.com/1333508 (NEW) Component: openstack-horizon Last change: 2016-05-20 Summary: LBaaS v2 Dashboard UI ### openstack-ironic-discoverd (1 bug) [1211069 ] http://bugzilla.redhat.com/1211069 (ASSIGNED) Component: openstack-ironic-discoverd Last change: 2016-02-26 Summary: [RFE] [RDO-Manager] [discoverd] Add possibility to kill node discovery ### openstack-keystone (1 bug) [1337346 ] http://bugzilla.redhat.com/1337346 (NEW) Component: openstack-keystone Last change: 2016-06-01 Summary: CVE-2016-4911 openstack-keystone: Incorrect Audit IDs in Keystone Fernet Tokens can result in revocation bypass [openstack-rdo] ### openstack-neutron (4 bugs) [1065826 ] http://bugzilla.redhat.com/1065826 (ASSIGNED) Component: openstack-neutron Last change: 2016-06-15 Summary: [RFE] [neutron] neutron services needs more RPM granularity [1282403 ] http://bugzilla.redhat.com/1282403 (NEW) Component: openstack-neutron Last change: 2016-06-15 Summary: Errors when running tempest.api.network.test_ports with IPAM reference driver enabled [1147152 ] http://bugzilla.redhat.com/1147152 (NEW) Component: openstack-neutron Last change: 2016-06-07 Summary: Use neutron-sanity-check in CI checks [1349670 ] http://bugzilla.redhat.com/1349670 (NEW) Component: openstack-neutron Last change: 2016-06-23 Summary: CVE-2015-8914 CVE-2016-5362 CVE-2016-5363 openstack- neutron: various flaws [openstack-rdo] ### openstack-nova (5 bugs) [1228836 ] http://bugzilla.redhat.com/1228836 
(NEW) Component: openstack-nova Last change: 2016-04-22 Summary: Is there a way to configure IO throttling for RBD devices via configuration file
[1344315 ] http://bugzilla.redhat.com/1344315 (NEW) Component: openstack-nova Last change: 2016-06-09 Summary: SRIOV PF/VF allocation fails with NUMA aware flavor
[1123298 ] http://bugzilla.redhat.com/1123298 (ASSIGNED) Component: openstack-nova Last change: 2016-04-22 Summary: logrotate should copytruncate to avoid openstack logging to deleted files
[1294747 ] http://bugzilla.redhat.com/1294747 (NEW) Component: openstack-nova Last change: 2016-05-16 Summary: Migration fails when the SRIOV PF is not online
[1086247 ] http://bugzilla.redhat.com/1086247 (ASSIGNED) Component: openstack-nova Last change: 2016-05-11 Summary: Ensure translations are installed correctly and picked up at runtime

### openstack-packstack (34 bugs)

[1200129 ] http://bugzilla.redhat.com/1200129 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: [RFE] add support for ceilometer workload partitioning via tooz/redis
[1194678 ] http://bugzilla.redhat.com/1194678 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: On aarch64, nova.conf should default to vnc_enabled=False
[1293693 ] http://bugzilla.redhat.com/1293693 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Keystone setup fails on missing required parameter
[1286995 ] http://bugzilla.redhat.com/1286995 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: PackStack should configure LVM filtering with LVM/iSCSI
[1344706 ] http://bugzilla.redhat.com/1344706 (ASSIGNED) Component: openstack-packstack Last change: 2016-06-10 Summary: openstack-ceilometer-compute fails to send metrics
[1297692 ] http://bugzilla.redhat.com/1297692 (ON_DEV) Component: openstack-packstack Last change: 2016-05-19 Summary: Raise MariaDB max connections limit
[1302766 ] http://bugzilla.redhat.com/1302766 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: Add Magnum support using puppet-magnum
[1285494 ] http://bugzilla.redhat.com/1285494 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: openstack-packstack-7.0.0-0.5.dev1661.gaf13b7e.el7.noarch cripples(?) httpd.conf
[1316222 ] http://bugzilla.redhat.com/1316222 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-18 Summary: Packstack installation failed due to wrong http config
[1291492 ] http://bugzilla.redhat.com/1291492 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Unfriendly behavior of IP filtering for VXLAN with EXCLUDE_SERVERS
[1227298 ] http://bugzilla.redhat.com/1227298 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack should support MTU settings
[1188491 ] http://bugzilla.redhat.com/1188491 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Packstack wording is unclear for demo and testing provisioning.
[1208812 ] http://bugzilla.redhat.com/1208812 (ASSIGNED) Component: openstack-packstack Last change: 2016-06-15 Summary: add DiskFilter to scheduler_default_filters
[1201612 ] http://bugzilla.redhat.com/1201612 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Interactive - Packstack asks for Tempest details even when Tempest install is declined
[1061753 ] http://bugzilla.redhat.com/1061753 (NEW) Component: openstack-packstack Last change: 2016-05-16 Summary: [RFE] Create an option in packstack to increase verbosity level of libvirt
[982035 ] http://bugzilla.redhat.com/982035 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-19 Summary: [RFE] Include Fedora cloud images in some nice way
[1005073 ] http://bugzilla.redhat.com/1005073 (NEW) Component: openstack-packstack Last change: 2016-04-19 Summary: [RFE] Please add glance and nova lib folder config
[903645 ] http://bugzilla.redhat.com/903645 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: RFE:
Include the ability in PackStack to support SSL for all REST services and message bus communication
[1239027 ] http://bugzilla.redhat.com/1239027 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: please move httpd log files to corresponding dirs
[1324070 ] http://bugzilla.redhat.com/1324070 (NEW) Component: openstack-packstack Last change: 2016-04-18 Summary: RFE: PackStack Support for LBaaSv2
[1168113 ] http://bugzilla.redhat.com/1168113 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: The warning message " NetworkManager is active " appears even when the NetworkManager is inactive
[1292271 ] http://bugzilla.redhat.com/1292271 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-19 Summary: Receive Msg 'Error: Could not find user glance'
[1116019 ] http://bugzilla.redhat.com/1116019 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: AMQP1.0 server configurations needed
[1097291 ] http://bugzilla.redhat.com/1097291 (NEW) Component: openstack-packstack Last change: 2016-05-18 Summary: [RFE] SPICE support in packstack
[1338496 ] http://bugzilla.redhat.com/1338496 (NEW) Component: openstack-packstack Last change: 2016-05-31 Summary: Failed to install with packstack
[1312487 ] http://bugzilla.redhat.com/1312487 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: Packstack with Swift Glance backend does not seem to work
[1184806 ] http://bugzilla.redhat.com/1184806 (NEW) Component: openstack-packstack Last change: 2016-04-28 Summary: [RFE] Packstack should support deploying Nova and Glance with RBD images and Ceph as a backend
[1172310 ] http://bugzilla.redhat.com/1172310 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-19 Summary: support Keystone LDAP
[1012382 ] http://bugzilla.redhat.com/1012382 (ON_DEV) Component: openstack-packstack Last change: 2016-04-19 Summary: swift: Admin user does not have permissions to see containers created by glance service
[1286828 ] http://bugzilla.redhat.com/1286828 (NEW) Component: openstack-packstack Last change: 2016-05-19 Summary: Packstack should have the option to install QoS (neutron)
[1172467 ] http://bugzilla.redhat.com/1172467 (NEW) Component: openstack-packstack Last change: 2016-04-19 Summary: New user cannot retrieve container listing
[1023533 ] http://bugzilla.redhat.com/1023533 (ASSIGNED) Component: openstack-packstack Last change: 2016-04-18 Summary: API services has all admin permission instead of service
[1344663 ] http://bugzilla.redhat.com/1344663 (ASSIGNED) Component: openstack-packstack Last change: 2016-06-17 Summary: packstack install fails if CONFIG_HEAT_CFN_INSTALL=y
[1063393 ] http://bugzilla.redhat.com/1063393 (ASSIGNED) Component: openstack-packstack Last change: 2016-05-18 Summary: RFE: Provide option to set bind_host/bind_port for API services

### openstack-puppet-modules (7 bugs)

[1318332 ] http://bugzilla.redhat.com/1318332 (NEW) Component: openstack-puppet-modules Last change: 2016-04-19 Summary: Cinder workaround should be removed
[1297535 ] http://bugzilla.redhat.com/1297535 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: Undercloud installation fails ::aodh::keystone::auth not found for instack
[1150902 ] http://bugzilla.redhat.com/1150902 (ASSIGNED) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: selinux prevents httpd to write to /var/log/horizon/horizon.log
[1316856 ] http://bugzilla.redhat.com/1316856 (NEW) Component: openstack-puppet-modules Last change: 2016-04-28 Summary: packstack fails to configure ovs bridge for CentOS
[1240736 ] http://bugzilla.redhat.com/1240736 (NEW) Component: openstack-puppet-modules Last change: 2016-04-18 Summary: trove guestagent config mods for integration testing
[1107907 ] http://bugzilla.redhat.com/1107907 (NEW) Component: openstack-puppet-modules Last change: 2016-05-18 Summary: Offset Swift ports to 6200
[1289761 ]
http://bugzilla.redhat.com/1289761 (NEW) Component: openstack-puppet-modules Last change: 2016-05-25 Summary: PackStack installs Nova crontab that nova user can't run

### openstack-sahara (2 bugs)

[1305790 ] http://bugzilla.redhat.com/1305790 (NEW) Component: openstack-sahara Last change: 2016-02-09 Summary: Failure to launch Caldera 5.0.4 Hadoop Cluster via Sahara Wizards on RDO Liberty
[1305419 ] http://bugzilla.redhat.com/1305419 (NEW) Component: openstack-sahara Last change: 2016-02-10 Summary: Failure to launch Hadoop HDP 2.0.6 Cluster via Sahara Wizards on RDO Liberty

### openstack-selinux (3 bugs)

[1320043 ] http://bugzilla.redhat.com/1320043 (NEW) Component: openstack-selinux Last change: 2016-04-19 Summary: rootwrap-daemon can't start after reboot due to AVC denial
[1174795 ] http://bugzilla.redhat.com/1174795 (NEW) Component: openstack-selinux Last change: 2016-04-18 Summary: keystone fails to start: raise exception.ConfigFileNotFound(config_file=paste_config_value)
[1341738 ] http://bugzilla.redhat.com/1341738 (NEW) Component: openstack-selinux Last change: 2016-06-01 Summary: AVC: beam.smp tries to write in SSL certificate

### openstack-tripleo (23 bugs)

[1056109 ] http://bugzilla.redhat.com/1056109 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Making the overcloud deployment fully HA
[1344620 ] http://bugzilla.redhat.com/1344620 (NEW) Component: openstack-tripleo Last change: 2016-06-10 Summary: Nova instance live migration fails: migrateToURI3() got an unexpected keyword argument 'bandwidth'
[1056110 ] http://bugzilla.redhat.com/1056110 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Scaling work to do during icehouse
[1344507 ] http://bugzilla.redhat.com/1344507 (NEW) Component: openstack-tripleo Last change: 2016-06-21 Summary: Nova novnc console doesn't load 2/3 times: Failed to connect to server (code: 1006)
[1344495 ] http://bugzilla.redhat.com/1344495 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Horizon: Error: Unable to retrieve project list and Error: Unable to retrieve domain list.
[1329095 ] http://bugzilla.redhat.com/1329095 (NEW) Component: openstack-tripleo Last change: 2016-04-22 Summary: mariadb and keystone down after an upgrade from liberty to mitaka
[1344398 ] http://bugzilla.redhat.com/1344398 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: SSL enabled undercloud doesn't configure https protocol for several endpoints
[1343634 ] http://bugzilla.redhat.com/1343634 (NEW) Component: openstack-tripleo Last change: 2016-06-07 Summary: controller-no-external.yaml template still creates external network and fails to deploy
[1350605 ] http://bugzilla.redhat.com/1350605 (NEW) Component: openstack-tripleo Last change: 2016-06-27 Summary: Error regarding the TUN device using quickstart.sh
[1344467 ] http://bugzilla.redhat.com/1344467 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Unable to launch instance: Invalid: Volume sets discard option, qemu (1, 6, 0) or later is required.
[1344442 ] http://bugzilla.redhat.com/1344442 (NEW) Component: openstack-tripleo Last change: 2016-06-09 Summary: Ceilometer central fails to start: ImportError: No module named redis
[1056106 ] http://bugzilla.redhat.com/1056106 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][ironic]: Integration of Ironic in to TripleO
[1277990 ] http://bugzilla.redhat.com/1277990 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: openstack-ironic-inspector-dnsmasq.service: failed to start during undercloud installation
[1344447 ] http://bugzilla.redhat.com/1344447 (NEW) Component: openstack-tripleo Last change: 2016-06-21 Summary: Openstack-gnocchi-statsd fails to start; ImportError: Your rados python module does not support omap feature.
Install 'cradox' (recommended) or upgrade 'python-rados' >= 9.1.0
[1277980 ] http://bugzilla.redhat.com/1277980 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: missing python-proliantutils
[1344451 ] http://bugzilla.redhat.com/1344451 (NEW) Component: openstack-tripleo Last change: 2016-06-20 Summary: HAProxy logs show up in the os-collect-config journal
[1334259 ] http://bugzilla.redhat.com/1334259 (NEW) Component: openstack-tripleo Last change: 2016-05-09 Summary: openstack overcloud image upload fails with "Required file "./ironic-python-agent.initramfs" does not exist."
[1340865 ] http://bugzilla.redhat.com/1340865 (NEW) Component: openstack-tripleo Last change: 2016-06-07 Summary: Tripleo QuickStart HA deployment attempts constantly crash
[1056112 ] http://bugzilla.redhat.com/1056112 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Deploying different architecture topologies with Tuskar
[1174776 ] http://bugzilla.redhat.com/1174776 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: User can not login into the overcloud horizon using the proper credentials
[1303614 ] http://bugzilla.redhat.com/1303614 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: overcloud deployment failed AttributeError: 'Proxy' object has no attribute 'api'
[1341093 ] http://bugzilla.redhat.com/1341093 (NEW) Component: openstack-tripleo Last change: 2016-06-01 Summary: Tripleo QuickStart HA deployment attempts constantly crash
[1056114 ] http://bugzilla.redhat.com/1056114 (NEW) Component: openstack-tripleo Last change: 2016-04-18 Summary: [RFE][tripleo]: Implement a complete overcloud installation story in the UI

### openstack-tripleo-heat-templates (2 bugs)

[1342145 ] http://bugzilla.redhat.com/1342145 (NEW) Component: openstack-tripleo-heat-templates Last change: 2016-06-02 Summary: Deploying Manila is not possible due to missing template
[1266027 ] http://bugzilla.redhat.com/1266027 (NEW) Component: openstack-tripleo-heat-templates Last change: 2016-04-18 Summary: TripleO should use pymysql database driver since Liberty

### openstack-tripleo-image-elements (2 bugs)

[1303567 ] http://bugzilla.redhat.com/1303567 (NEW) Component: openstack-tripleo-image-elements Last change: 2016-04-18 Summary: Overcloud deployment fails using Ceph
[1347652 ] http://bugzilla.redhat.com/1347652 (NEW) Component: openstack-tripleo-image-elements Last change: 2016-06-22 Summary: post-install deletes foreign o-r-c configure.d scripts

### openstack-trove (1 bug)

[1327068 ] http://bugzilla.redhat.com/1327068 (NEW) Component: openstack-trove Last change: 2016-05-24 Summary: trove guest agent should create a sudoers entry

### Package Review (14 bugs)

[1342987 ] http://bugzilla.redhat.com/1342987 (NEW) Component: Package Review Last change: 2016-06-19 Summary: Review Request: openstack-vitrage - OpenStack RCA (Root Cause Analysis) Engine
[1344368 ] http://bugzilla.redhat.com/1344368 (NEW) Component: Package Review Last change: 2016-06-09 Summary: Review Request: openstack-ironic-ui - OpenStack Ironic Dashboard
[1177361 ] http://bugzilla.redhat.com/1177361 (ASSIGNED) Component: Package Review Last change: 2016-06-15 Summary: Review Request: sahara-image-elements - Image creation tools for Openstack Sahara
[1341687 ] http://bugzilla.redhat.com/1341687 (NEW) Component: Package Review Last change: 2016-06-03 Summary: Review request: openstack-neutron-lbaas-ui
[1272513 ] http://bugzilla.redhat.com/1272513 (ASSIGNED) Component: Package Review Last change: 2016-05-20 Summary: Review Request: Murano - is an application catalog for OpenStack
[1329125 ] http://bugzilla.redhat.com/1329125 (ASSIGNED) Component: Package Review Last change: 2016-04-26 Summary: Review Request: python-oslo-privsep - OpenStack library for privilege separation
[1350974 ] http://bugzilla.redhat.com/1350974 (NEW) Component: Package Review Last change: 2016-06-28 Summary: Openstack python-watcherclient
[1331486 ]
http://bugzilla.redhat.com/1331486 (NEW) Component: Package Review Last change: 2016-05-24 Summary: Tracker bugzilla for puppet packages in RDO Newton cycle
[1312328 ] http://bugzilla.redhat.com/1312328 (NEW) Component: Package Review Last change: 2016-05-19 Summary: New Package: openstack-ironic-staging-drivers
[1318765 ] http://bugzilla.redhat.com/1318765 (NEW) Component: Package Review Last change: 2016-06-16 Summary: Review Request: openstack-sahara-tests - Sahara Scenario Test Framework
[1342227 ] http://bugzilla.redhat.com/1342227 (ASSIGNED) Component: Package Review Last change: 2016-06-06 Summary: Review Request: python-designate-tests-tempest - Tempest Integration of Designate
[1279513 ] http://bugzilla.redhat.com/1279513 (ASSIGNED) Component: Package Review Last change: 2016-04-18 Summary: New Package: python-dracclient
[1326586 ] http://bugzilla.redhat.com/1326586 (NEW) Component: Package Review Last change: 2016-04-13 Summary: Review request: Sensu
[1272524 ] http://bugzilla.redhat.com/1272524 (ASSIGNED) Component: Package Review Last change: 2016-05-19 Summary: Review Request: openstack-mistral - workflow Service for OpenStack cloud

### python-novaclient (1 bug)

[1123451 ] http://bugzilla.redhat.com/1123451 (ASSIGNED) Component: python-novaclient Last change: 2016-05-02 Summary: Missing versioned dependency on python-six

### rdo-manager (14 bugs)

[1306350 ] http://bugzilla.redhat.com/1306350 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: With RDO-manager, if not configured, the first nic on compute nodes gets addresses from dhcp as a default
[1272376 ] http://bugzilla.redhat.com/1272376 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Duplicate nova hypervisors after rebooting compute nodes
[1216981 ] http://bugzilla.redhat.com/1216981 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: No way to increase yum timeouts when building images
[1270370 ] http://bugzilla.redhat.com/1270370 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: [RDO-Manager] bulk introspection moving the nodes from available to manageable too quickly [getting: NodeLocked:]
[1273541 ] http://bugzilla.redhat.com/1273541 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: RDO-Manager needs epel.repo enabled (otherwise undercloud deployment fails.)
[1292253 ] http://bugzilla.redhat.com/1292253 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Production + EPEL + yum-plugin-priorities results in wrong version of hiera
[1271726 ] http://bugzilla.redhat.com/1271726 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: 1 of the overcloud VMs (nova) is stack in spawning state
[1270910 ] http://bugzilla.redhat.com/1270910 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: IP address from external subnet gets assigned to br-ex when using default single-nic-vlans templates
[1306364 ] http://bugzilla.redhat.com/1306364 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: With RDO-manager, using bridge mappings, Neutron opensvswitch-agent plugin's config file don't gets populated correctly
[1273680 ] http://bugzilla.redhat.com/1273680 (ASSIGNED) Component: rdo-manager Last change: 2016-04-18 Summary: HA overcloud with network isolation deployment fails
[1270805 ] http://bugzilla.redhat.com/1270805 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: Glance client returning 'Expected endpoint'
[1230582 ] http://bugzilla.redhat.com/1230582 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: there is a newer image that can be used to deploy openstack
[1294683 ] http://bugzilla.redhat.com/1294683 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: instack-undercloud: "openstack undercloud install" throws errors and then gets stuck due to selinux.
[1271289 ] http://bugzilla.redhat.com/1271289 (NEW) Component: rdo-manager Last change: 2016-04-18 Summary: overcloud-novacompute stuck in spawning state

### rdopkg (1 bug)

[1100405 ] http://bugzilla.redhat.com/1100405 (ASSIGNED) Component: rdopkg Last change: 2014-05-22 Summary: [RFE] Add option to force overwrite build files on update download

### RFEs (2 bugs)

[1193886 ] http://bugzilla.redhat.com/1193886 (ASSIGNED) Component: RFEs Last change: 2016-04-18 Summary: RFE: wait for DB after boot
[1158517 ] http://bugzilla.redhat.com/1158517 (NEW) Component: RFEs Last change: 2016-05-20 Summary: [RFE] Provide easy to use upgrade tool

### tempest (3 bugs)

[1345736 ] http://bugzilla.redhat.com/1345736 (NEW) Component: tempest Last change: 2016-06-17 Summary: Installing mitaka openstack-tempest RDO build with some workarounds fails to run tempest smoke tests
[1344339 ] http://bugzilla.redhat.com/1344339 (NEW) Component: tempest Last change: 2016-06-14 Summary: install_venv script fails to open requirements file
[1250081 ] http://bugzilla.redhat.com/1250081 (NEW) Component: tempest Last change: 2015-08-06 Summary: test_minimum_basic scenario failed to run on rdo-manager

## Fixed bugs

This is a list of "fixed" bugs by component. A "fixed" bug is one whose state is MODIFIED, POST, or ON_QA, meaning a fix has been written but not yet verified. You can help out by testing the fix to make sure it works as intended.
(43 bugs)

### distribution (5 bugs)

[1328980 ] http://bugzilla.redhat.com/1328980 (MODIFIED) Component: distribution Last change: 2016-04-21 Summary: Log handler repeatedly crashes
[1344148 ] http://bugzilla.redhat.com/1344148 (ON_QA) Component: distribution Last change: 2016-06-17 Summary: RDO mitaka openstack-tempest build requires updated python-urllib3
[1336566 ] http://bugzilla.redhat.com/1336566 (ON_QA) Component: distribution Last change: 2016-05-20 Summary: Paramiko needs to be updated to 2.0 to match upstream requirement
[1317971 ] http://bugzilla.redhat.com/1317971 (POST) Component: distribution Last change: 2016-05-23 Summary: openstack-cloudkitty-common should have a /etc/cloudkitty/api_paste.ini
[1134121 ] http://bugzilla.redhat.com/1134121 (POST) Component: distribution Last change: 2016-04-18 Summary: Tuskar Fails After Remove/Reinstall Of RDO

### instack-undercloud (1 bug)

[1270033 ] http://bugzilla.redhat.com/1270033 (POST) Component: instack-undercloud Last change: 2016-05-05 Summary: [RDO-Manager] Node inspection fails when changing the default 'inspection_iprange' value in undecloud.conf.

### openstack-ceilometer (1 bug)

[1287252 ] http://bugzilla.redhat.com/1287252 (POST) Component: openstack-ceilometer Last change: 2016-04-18 Summary: openstack-ceilometer-alarm-notifier does not start: unit file is missing

### openstack-glance (1 bug)

[1074724 ] http://bugzilla.redhat.com/1074724 (POST) Component: openstack-glance Last change: 2016-04-19 Summary: Glance api ssl issue

### openstack-ironic (1 bug)

[1348817 ] http://bugzilla.redhat.com/1348817 (ON_QA) Component: openstack-ironic Last change: 2016-06-23 Summary: CVE-2016-4985 openstack-ironic: Ironic Node information including credentials exposed to unauthenticated users [openstack-rdo]

### openstack-ironic-discoverd (1 bug)

[1322892 ] http://bugzilla.redhat.com/1322892 (MODIFIED) Component: openstack-ironic-discoverd Last change: 2016-06-17 Summary: No valid interfaces found during introspection

### openstack-keystone (2 bugs)

[1341332 ] http://bugzilla.redhat.com/1341332 (POST) Component: openstack-keystone Last change: 2016-06-01 Summary: keystone logrotate configuration should use size configuration
[1280530 ] http://bugzilla.redhat.com/1280530 (MODIFIED) Component: openstack-keystone Last change: 2016-05-20 Summary: Fernet tokens cannot read key files with SELInux enabled

### openstack-neutron (1 bug)

[1334797 ] http://bugzilla.redhat.com/1334797 (POST) Component: openstack-neutron Last change: 2016-06-15 Summary: Ensure translations are installed correctly and picked up at runtime

### openstack-nova (1 bug)

[1301156 ] http://bugzilla.redhat.com/1301156 (POST) Component: openstack-nova Last change: 2016-04-22 Summary: openstack-nova missing specfile requires on castellan>=0.3.1

### openstack-packstack (18 bugs)

[1335612 ] http://bugzilla.redhat.com/1335612 (MODIFIED) Component: openstack-packstack Last change: 2016-05-31 Summary: CONFIG_USE_SUBNETS=y won't work correctly with VLAN
[1028690 ] http://bugzilla.redhat.com/1028690 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary:
packstack requires 2 runs to install ceilometer
[1288179 ] http://bugzilla.redhat.com/1288179 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Mitaka: Packstack image provisioning fails with "Store filesystem could not be configured correctly"
[1018900 ] http://bugzilla.redhat.com/1018900 (MODIFIED) Component: openstack-packstack Last change: 2016-05-18 Summary: Packstack fails with "The iptables provider can not handle attribute outiface"
[1080369 ] http://bugzilla.redhat.com/1080369 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: packstack fails with KeyError :CONFIG_PROVISION_DEMO_FLOATRANGE if more compute-hosts are added
[1302275 ] http://bugzilla.redhat.com/1302275 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: neutron-l3-agent does not start on Mitaka-2 when enabling FWaaS
[1302256 ] http://bugzilla.redhat.com/1302256 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: neutron-server does not start on Mitaka-2 when enabling LBaaS
[1296899 ] http://bugzilla.redhat.com/1296899 (POST) Component: openstack-packstack Last change: 2016-06-15 Summary: Swift's proxy-server is not configured to use ceilometer
[1282746 ] http://bugzilla.redhat.com/1282746 (POST) Component: openstack-packstack Last change: 2016-05-18 Summary: Swift's proxy-server is not configured to use ceilometer
[1150652 ] http://bugzilla.redhat.com/1150652 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: PackStack does not provide an option to register hosts to Red Hat Satellite 6
[1297833 ] http://bugzilla.redhat.com/1297833 (POST) Component: openstack-packstack Last change: 2016-04-19 Summary: VPNaaS should use libreswan driver instead of openswan by default
[1049537 ] http://bugzilla.redhat.com/1049537 (MODIFIED) Component: openstack-packstack Last change: 2016-05-18 Summary: Horizon help url in RDO points to the RHOS documentation
[1187412 ] http://bugzilla.redhat.com/1187412 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Script wording for service installation should be consistent
[1255369 ] http://bugzilla.redhat.com/1255369 (POST) Component: openstack-packstack Last change: 2016-05-19 Summary: Improve session settings for horizon
[1298245 ] http://bugzilla.redhat.com/1298245 (MODIFIED) Component: openstack-packstack Last change: 2016-04-18 Summary: Add possibility to change DEFAULT/api_paste_config in trove.conf
[1148949 ] http://bugzilla.redhat.com/1148949 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: openstack-packstack: installed "packstack --allinone" on Centos7.0 and configured private networking. The booted VMs are not able to communicate with each other, nor ping the gateway.
[1124982 ] http://bugzilla.redhat.com/1124982 (POST) Component: openstack-packstack Last change: 2016-04-18 Summary: Help text for SSL is incorrect regarding passphrase on the cert
[1330289 ] http://bugzilla.redhat.com/1330289 (POST) Component: openstack-packstack Last change: 2016-05-21 Summary: Failure to install Controller/Network&&Compute Cluster on RDO Mitaka with keystone API V3

### openstack-utils (1 bug)

[1211989 ] http://bugzilla.redhat.com/1211989 (POST) Component: openstack-utils Last change: 2016-04-18 Summary: openstack-status shows 'disabled on boot' for the mysqld service

### Package Review (4 bugs)

[1347193 ] http://bugzilla.redhat.com/1347193 (MODIFIED) Component: Package Review Last change: 2016-06-24 Summary: Openstack Watcher service
[1323219 ] http://bugzilla.redhat.com/1323219 (ON_QA) Component: Package Review Last change: 2016-05-12 Summary: Review Request: openstack-trove-ui - OpenStack Dashboard plugin for Trove project
[1318310 ] http://bugzilla.redhat.com/1318310 (POST) Component: Package Review Last change: 2016-06-07 Summary: Review Request: openstack-magnum-ui - OpenStack Magnum UI Horizon plugin
[1331952 ] http://bugzilla.redhat.com/1331952 (POST) Component: Package Review Last change: 2016-06-01 Summary: Review Request: openstack-mistral-ui - OpenStack Mistral Dashboard

### python-keystoneclient (1 bug)

[973263 ] http://bugzilla.redhat.com/973263 (POST) Component: python-keystoneclient Last change: 2016-04-19 Summary: user-get fails when using IDs which are not UUIDs

### rdo-manager (2 bugs)

[1271335 ] http://bugzilla.redhat.com/1271335 (POST) Component: rdo-manager Last change: 2016-06-09 Summary: [RFE] Support explicit configuration of L2 population
[1268990 ] http://bugzilla.redhat.com/1268990 (POST) Component: rdo-manager Last change: 2016-04-18 Summary: missing from docs Build images fails without : export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo"

### rdo-manager-cli (2 bugs)

[1273197 ] http://bugzilla.redhat.com/1273197 (POST) Component: rdo-manager-cli Last change: 2016-04-18 Summary: VXLAN should be default neutron network type
[1278972 ] http://bugzilla.redhat.com/1278972 (POST) Component: rdo-manager-cli Last change: 2016-04-18 Summary: rdo-manager liberty delorean dib failing w/ "No module named passlib.utils"

### tempest (1 bug)

[1342218 ] http://bugzilla.redhat.com/1342218 (MODIFIED) Component: tempest Last change: 2016-06-03 Summary: RDO openstack-tempest RPM should remove requirements.txt from source

Thanks,

Chandan Kumar

From bderzhavets at hotmail.com Wed Jun 29 13:36:14 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 29 Jun 2016 13:36:14 +0000
Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
In-Reply-To: <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com>
References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com>
 <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com>
 <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com>
 <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com>
Message-ID:

Attempt to follow steps suggested in
http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html

./deploy-overstack crashes

2016-06-29 12:42:41 [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 6
2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: CREATE_FAILED Error: resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-06-29 12:42:43 [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED Error: resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-06-29 12:42:44 [2]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6
2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown
2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown
Stack overcloud CREATE_FAILED
Deployment failed: Heat Stack create failed.
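The deploy script then hunts down the failed deployments by grepping the table printed by `heat resource-list --nested-depth 5 overcloud` and cutting out the third `|`-separated field (the physical resource id), feeding each id to `heat deployment-show`. That extraction can be sketched in Python as a small parser; the column layout and the sample rows below are assumed from the trace in this thread, not taken from real output:

```python
# Hedged sketch of the grep/cut pipeline:
#   heat resource-list --nested-depth 5 overcloud \
#     | grep FAILED | grep 'StructuredDeployment ' | cut -d '|' -f3
# Assumed column order: | resource_name | physical_resource_id | resource_type | resource_status |

def failed_deployment_ids(table: str) -> list[str]:
    """Return the physical_resource_id of every FAILED StructuredDeployment row."""
    ids = []
    for line in table.splitlines():
        # Mirror the two greps: keep only failed StructuredDeployment rows.
        if "FAILED" not in line or "StructuredDeployment " not in line:
            continue
        # Mirror cut -d'|' -f3: field 1 is empty (text before the first '|'),
        # field 2 is resource_name, field 3 is physical_resource_id.
        cols = [c.strip() for c in line.split("|")]
        ids.append(cols[2])
    return ids

# Sample rows in the assumed layout (second row must be filtered out).
sample = """\
| 0 | 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 | OS::Heat::StructuredDeployment | CREATE_FAILED |
| 1 | deadbeef-0000-0000-0000-000000000000 | OS::Heat::StructuredDeployment | CREATE_COMPLETE |
"""
print(failed_deployment_ids(sample))
```

Each id printed here is what the script passes to `heat deployment-show` to get the actual deploy stdout/stderr for that node.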
+ heat stack-list
+ grep -q CREATE_FAILED
+ deploy_status=1
++ heat resource-list --nested-depth 5 overcloud
++ grep FAILED
++ grep 'StructuredDeployment '
++ cut -d '|' -f3
+ for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
+ heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52
+ for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
+ heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d
+ for failed in '$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)'
+ heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624
+ exit 1

*****************************
Troubleshooting steps :-
*****************************

[stack at undercloud ~]$ . stackrc

[stack at undercloud ~]$ heat resource-list overcloud | grep ControllerNodesPost
| ControllerNodesPostDeployment | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | OS::TripleO::ControllerPostDeployment | CREATE_FAILED | 2016-06-29T12:11:21 |

[stack at undercloud ~]$ heat stack-list -n | grep "^| f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3"
| f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | CREATE_FAILED | 2016-06-29T12:31:11 | None | 17f82f6e-e0ca-44c6-9058-de82c00d4f79 |

[stack at undercloud ~]$ heat event-list -m f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk
| resource_name | id | resource_status_reason | resource_status | event_time |
| overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started (Steps 1, 2, 3 succeeded) | CREATE_IN_PROGRESS | 2016-06-29T12:31:14 |
| ControllerPuppetConfig | a2a1df33-5106-425c-b16d-8d2df709b19f | state changed | CREATE_COMPLETE | 2016-06-29T12:35:02 |
| ControllerOvercloudServicesDeployment_Step4 | 1e151333-4de5-4e7b-907c-ea0f42d31a47 | state changed | CREATE_IN_PROGRESS | 2016-06-29T12:35:03 |
| ControllerOvercloudServicesDeployment_Step4 | 7bf36334-3d92-4554-b6c0-41294a072ab6 | Error: resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:42 |
| overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | e72fb6f4-c2aa-4fe8-9bd1-5f5ad152685c | Resource CREATE failed: Error: resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:43 |

[stack at undercloud ~]$ heat stack-show overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | grep NodeConfigIdentifiers
| "NodeConfigIdentifiers": "{u'deployment_identifier': 1467202276, u'controller_config': {u'1': u'os-apply-config deployment 796df02a-7550-414b-a084-8b591a13e6db completed,Root CA cert injection not enabled.,TLS not enabled.,None,', u'0': u'os-apply-config deployment 613ec889-d852-470a-8e4c-6e243e1d2033 completed,Root CA cert injection not enabled.,TLS not enabled.,None,', u'2': u'os-apply-config deployment c8b099d0-3af4-4ba0-a056-a0ce60f40e2d completed,Root CA cert injection not enabled.,TLS not enabled.,None,'}, u'allnodes_extra': u'none'}" |

However, once stack creation has failed, a partial update will not help:

[stack at undercloud ~]$ heat stack-update -x overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk -e update_env.yaml
ERROR: PATCH update to non-COMPLETE stack is not supported.

because:

[stack at undercloud ~]$ heat stack-list
| id | stack_name | stack_status | creation_time | updated_time |
| 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | overcloud | CREATE_FAILED | 2016-06-29T12:11:20 | None |

The complete output of `heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52` is attached as a gzip archive.

Thanks.
Boris.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: HeatCrash2.txt.gz
Type: application/gzip
Size: 19265 bytes
Desc: HeatCrash2.txt.gz
URL: 

From bderzhavets at hotmail.com Wed Jun 29 14:03:50 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Wed, 29 Jun 2016 14:03:50 +0000
Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
In-Reply-To: 
References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com>
Message-ID: 

Boris Derzhavets has shared a OneDrive file with you. To view it, click the link below.

HeatCrash2.txt 1.gz

Reattaching the gzip archive via OneDrive.

________________________________
From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
Sent: Wednesday, June 29, 2016 9:36 AM
To: John Trowbridge; shardy at redhat.com
Cc: rdo-list at redhat.com
Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ichavero at redhat.com Wed Jun 29 15:58:29 2016
From: ichavero at redhat.com (Ivan Chavero)
Date: Wed, 29 Jun 2016 11:58:29 -0400 (EDT)
Subject: [rdo-list] [Minute] RDO meeting (2016-06-29) Minutes
In-Reply-To: <305169365.2472021.1467215714158.JavaMail.zimbra@redhat.com>
Message-ID: <1964262589.2472419.1467215909966.JavaMail.zimbra@redhat.com>

==============================
#rdo: RDO meeting (2016-06-29)
==============================

Meeting started by imcsk8 at 15:00:45 UTC.
The full logs are available at
https://meetbot.fedoraproject.org/rdo/2016-06-29/rdo_meeting_(2016-06-29).2016-06-29-15.00.log.html

Meeting summary
---------------
* roll call (imcsk8, 15:00:57)
* pinning some packages in RDO Trunk-not-all-master? (imcsk8, 15:06:32)
  * ACTION: apevec post rdo-trunk-upper-constraints to rdo-list (apevec, 15:14:36)
* Does it make sense to have an RDO ISO installer? (imcsk8, 15:23:51)
  * LINK: https://github.com/asadpiz/org_centos_cloud (number80, 15:29:06)
  * ACTION: imcsk8 to send message to ML about ISO PoC installer (imcsk8, 15:34:22)
* open floor (imcsk8, 15:35:32)

Meeting ended at 15:52:50 UTC.

Action Items
------------
* apevec post rdo-trunk-upper-constraints to rdo-list
* imcsk8 to send message to ML about ISO PoC installer

Action Items, by person
-----------------------
* apevec
  * apevec post rdo-trunk-upper-constraints to rdo-list
* imcsk8
  * imcsk8 to send message to ML about ISO PoC installer
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* apevec (66)
* imcsk8 (34)
* trown (27)
* amoralej (16)
* jpena (15)
* number80 (12)
* zodbot (11)
* openstack (6)
* leifmadsen (4)
* chandankumar (4)
* weshay (3)
* kbsingh (1)
* rdobot (1)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

From dsneddon at redhat.com Wed Jun 29 17:42:22 2016
From: dsneddon at redhat.com (Dan Sneddon)
Date: Wed, 29 Jun 2016 10:42:22 -0700
Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
In-Reply-To: 
References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com>
Message-ID: <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com>

On 06/29/2016 07:03 AM, Boris Derzhavets wrote:
> Boris Derzhavets has shared a OneDrive file with you.
> [...]

The failure occurred during the post-deployment, which means that the
initial deployment succeeded, but then the steps that are done to the
completed overcloud failed.

This is most commonly attributable to network problems between the
Undercloud and the Overcloud Public API. The Undercloud needs to reach
the Public API in order to do some of the post-configuration steps. If
this API isn't reachable, you end up with the error you saw above.

You can test this connectivity by pinging the Public API VIP from the
Undercloud. Starting with the failed deployment, run "neutron
port-list" against the Undercloud and look for the IP on the port
named "public_virtual_ip". You should be able to ping this address
from the Undercloud.
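That check can be scripted. A minimal sketch — the `neutron port-list` row below is invented for illustration (the port name "public_virtual_ip" comes from the advice above; the IDs and addresses are made up):

```shell
# Hypothetical row for the port named public_virtual_ip, as printed by
# `neutron port-list` on the undercloud.
row='| 9a1b22c3 | public_virtual_ip | fa:16:3e:aa:bb:cc | {"subnet_id": "e3f2", "ip_address": "192.0.2.5"} |'

# Pull the ip_address value out of the fixed_ips column.
vip=$(printf '%s\n' "$row" \
       | grep -o '"ip_address": "[^"]*"' \
       | cut -d '"' -f4)
echo "$vip"   # -> 192.0.2.5

# On a real undercloud you would then verify reachability:
#   ping -c 3 "$vip"
```

If the ping fails, the post-deployment steps that need the Public API will fail the same way.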
If you can't reach that IP, then you need to check the connectivity/routing between the Undercloud and the External network on the Overcloud. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From dsneddon at redhat.com Wed Jun 29 17:46:19 2016 From: dsneddon at redhat.com (Dan Sneddon) Date: Wed, 29 Jun 2016 10:46:19 -0700 Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) In-Reply-To: <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com> References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com> Message-ID: On 06/29/2016 10:42 AM, Dan Sneddon wrote: > On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >> Boris Derzhavets has shared a?OneDrive?file with you. To view it, click >> the link below. 
>> >> >> >> HeatCrash2.txt 1.gz >> [HeatCrash2.txt 1.gz] >> >> Reattach gzip archive via One Drive >> >> >> >> ----------------------------------------------------------------------- >> *From:* rdo-list-bounces at redhat.com on >> behalf of Boris Derzhavets >> *Sent:* Wednesday, June 29, 2016 9:36 AM >> *To:* John Trowbridge; shardy at redhat.com >> *Cc:* rdo-list at redhat.com >> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again ( >> ControllerOvercloudServicesDeployment_Step4 ) >> >> >> Attempt to follow steps suggested >> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html >> >> >> ./deploy-overstack crashes >> >> >> 2016-06-29 12:42:41 >> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: >> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment >> to server failed: deploy_status_code : Deployment exited with non-zero >> status code: 6 >> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: >> CREATE_FAILED Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:43 >> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >> Resource CREATE failed: Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >> Error: >> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:44 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 
12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: >> Error: >> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. >> + heat stack-list >> + grep -q CREATE_FAILED >> + deploy_status=1 >> ++ heat resource-list --nested-depth 5 overcloud >> ++ grep FAILED >> ++ grep 'StructuredDeployment ' >> ++ cut -d '|' -f3 >> + for failed in '$(heat resource-list --nested-depth 5 >> overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 >> + for failed in '$(heat resource-list --nested-depth 5 >> overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d >> + for failed in '$(heat resource-list --nested-depth 5 >> overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624 >> + exit 1 >> >> ***************************** >> Troubleshooting steps :- >> ***************************** >> >> [stack at undercloud ~]$ . 
stackrc >> [stack at undercloud ~]$ heat resource-list overcloud | grep >> ControllerNodesPost >> | ControllerNodesPostDeployment | >> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >> OS::TripleO::ControllerPostDeployment | CREATE_FAILED | >> 2016-06-29T12:11:21 | >> >> >> [stack at undercloud ~]$ heat stack-list -n | grep "^| >> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3" >> | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >> | CREATE_FAILED | 2016-06-29T12:31:11 | None | >> 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | >> >> >> >> [stack at undercloud ~]$ heat event-list -m >> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >> >> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >> | resource_name | >> id | >> resource_status_reason >> | resource_status | event_time | >> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | >> 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started >> . . . . . . . . . . . . . . . . . >> Step1,2,3 succeeded >> . . . . . . . . . . . . . . . . . 
>> >> | CREATE_IN_PROGRESS | 2016-06-29T12:31:14 | >> | ControllerPuppetConfig | >> a2a1df33-5106-425c-b16d-8d2df709b19f | state >> changed >> | CREATE_COMPLETE | 2016-06-29T12:35:02 | >> | ControllerOvercloudServicesDeployment_Step4 | >> 1e151333-4de5-4e7b-907c-ea0f42d31a47 | state >> changed >> | CREATE_IN_PROGRESS | 2016-06-29T12:35:03 | >> | ControllerOvercloudServicesDeployment_Step4 | >> 7bf36334-3d92-4554-b6c0-41294a072ab6 | Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 | CREATE_FAILED | >> 2016-06-29T12:42:42 | >> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >> | e72fb6f4-c2aa-4fe8-9bd1-5f5ad152685c | Resource CREATE failed: >> Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:43 | >> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >> >> [stack at undercloud ~]$ heat stack-show >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | grep >> NodeConfigIdentifiers >> | | "NodeConfigIdentifiers": >> "{u'deployment_identifier': 1467202276, u'controller_config': {u'1': >> u'os-apply-config deployment 796df02a-7550-414b-a084-8b591a13e6db >> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >> u'0': u'os-apply-config deployment 613ec889-d852-470a-8e4c-6e243e1d2033 >> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >> u'2': u'os-apply-config deployment c8b099d0-3af4-4ba0-a056-a0ce60f40e2d >> completed,Root CA cert injection not 
enabled.,TLS not enabled.,None,'}, >> u'allnodes_extra': u'none'}" | >> >> However, once the stack create had crashed, an update wouldn't help. >> >> [stack at undercloud ~]$ heat stack-update -x >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk -e update_env.yaml >> ERROR: PATCH update to non-COMPLETE stack is not supported. >> >> DUE TO :- >> >> [stack at undercloud ~]$ heat stack-list >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> | id | stack_name | stack_status | >> creation_time | updated_time | >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> | 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | overcloud | CREATE_FAILED | >> 2016-06-29T12:11:20 | None | >> +--------------------------------------+------------+---------------+---------------------+------ >> >> >> Complete error file `heat deployment-show >> 655c77fc-6a78-4cca-b4b7-a153a3f4ad52` is attached as a gzip archive. >> >> >> Thanks. >> >> Boris. >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > The failure occurred during the post-deployment, which means that the > initial deployment succeeded, but then the steps that are done to the > completed overcloud failed. > > This is most commonly attributable to network problems between the > Undercloud and the Overcloud Public API. The Undercloud needs to reach > the Public API in order to do some of the post-configuration steps. If > this API isn't reachable, you end up with the error you saw above. > > You can test this connectivity by pinging the Public API VIP from the > Undercloud. Starting with the failed deployment, run "neutron > port-list" against the Undercloud and look for the IP on the port > named "public_virtual_ip". 
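Concretely, that lookup can be sketched as a small shell helper. This is only an illustration: the port-list row and the 10.0.0.4 address below are made-up samples, not values from this deployment.

```shell
# Extract the IP of the port named "public_virtual_ip" from
# `neutron port-list` table output; the fixed_ips column carries a
# JSON-like blob containing "ip_address": "x.x.x.x".
extract_public_vip() {
  grep 'public_virtual_ip' | sed -n 's/.*"ip_address": *"\([0-9.]*\)".*/\1/p'
}

# Hypothetical sample row, shaped like the neutron CLI's table output:
sample='| 1a2b3c | public_virtual_ip | fa:16:3e:aa:bb:cc | {"subnet_id": "d4e5f6", "ip_address": "10.0.0.4"} |'

vip=$(printf '%s\n' "$sample" | extract_public_vip)
echo "$vip"   # prints 10.0.0.4

# On a real undercloud the check would be, roughly:
#   source stackrc
#   vip=$(neutron port-list | extract_public_vip)
#   ping -c 3 "$vip"
```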
You should be able to ping this address from > the Undercloud. If you can't reach that IP, then you need to check the > connectivity/routing between the Undercloud and the External network on > the Overcloud. > I should also mention common causes of this problem: * Incorrect value for ExternalInterfaceDefaultRoute in the network environment file. * Controllers do not have the default route on the External network in the NIC config templates (required for reachability from remote subnets). * Incorrect subnet mask on the ExternalNetCidr in the network environment. * Incorrect ExternalAllocationPools values in the network environment. * Incorrect Ethernet switch config for the Controllers. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter From mscherer at redhat.com Wed Jun 29 20:24:13 2016 From: mscherer at redhat.com (Michael Scherer) Date: Wed, 29 Jun 2016 22:24:13 +0200 Subject: [rdo-list] OS1 is down for the moment Message-ID: <1467231853.2859.28.camel@redhat.com> Hi *, so the cloud where several services are hosted is currently down. So that means, if you receive this email, that: - new website content will not auto-deploy (ovirt, community.redhat.com, rdo) - the main website and associated services are down (rdo, theopensourceway.org) The team is working on it, and I will send an update when it is back (soon). -- Michael Scherer Sysadmin, Community Infrastructure and Platform, OSAS -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: This is a digitally signed message part URL: From bderzhavets at hotmail.com Wed Jun 29 21:14:14 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 29 Jun 2016 21:14:14 +0000 Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) In-Reply-To: References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com>, Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon Sent: Wednesday, June 29, 2016 1:46 PM To: rdo-list at redhat.com Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) On 06/29/2016 10:42 AM, Dan Sneddon wrote: > On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >> Boris Derzhavets has shared a OneDrive file with you. To view it, click >> the link below. 
>> >> [https://p.sfx.ms/icons/v2/Large/Default.png] HeatCrash2.txt 1.gz 1drv.ms GZ File >> >> HeatCrash2.txt 1.gz >> [HeatCrash2.txt 1.gz] >> >> Reattach gzip archive via One Drive >> >> >> >> ----------------------------------------------------------------------- >> *From:* rdo-list-bounces at redhat.com on >> behalf of Boris Derzhavets >> *Sent:* Wednesday, June 29, 2016 9:36 AM >> *To:* John Trowbridge; shardy at redhat.com >> *Cc:* rdo-list at redhat.com >> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again ( >> ControllerOvercloudServicesDeployment_Step4 ) >> >> >> Attempt to follow steps suggested >> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html >> >> >> ./deploy-overstack crashes >> >> >> 2016-06-29 12:42:41 >> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: >> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment >> to server failed: deploy_status_code : Deployment exited with non-zero >> status code: 6 >> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: >> CREATE_FAILED Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:43 >> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >> Resource CREATE failed: Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >> Error: >> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:44 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:45 [2]: 
SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >> [remainder of the deployment log, the troubleshooting steps, and Dan's reply trimmed here; they are quoted in full in the preceding messages] The issue had been reproduced 4 times, on a daily basis since 06/25/16, with exactly the same error at Step4 of overcloud-ControllerNodesPostDeployment. In the meantime I cannot reproduce the error: the 3xNode HA Controller + 1xCompute config works. There was one more issue: 3xNode HA Controller + 2xCompute failed immediately when overcloud-deploy.sh started, because only 4 nodes could be introspected. I will test it tomorrow morning. Thanks a lot. Boris. -- Dan Sneddon | Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack 650.254.4025 | dsneddon:irc @dxs:twitter _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dmanchad at redhat.com Wed Jun 29 22:55:53 2016 From: dmanchad at redhat.com (David Manchado Cuesta) Date: Thu, 30 Jun 2016 00:55:53 +0200 Subject: [rdo-list] OS1 is down for the moment In-Reply-To: <1467231853.2859.28.camel@redhat.com> References: <1467231853.2859.28.camel@redhat.com> Message-ID: Michael, rdoproject.org and theopensourceway.org are back, do not hesitate to let me know if there is any other service that might be impacted. Regards --- David Manchado SW Engineer - RHOS Ops Team On Wed, Jun 29, 2016 at 10:24 PM, Michael Scherer wrote: > Hi *, > > so the cloud where several services are hosted is currently down. > > So that mean, if you receive this email that: > - new website will not auto deploy (ovirt, community.redhat.com, rdo) > - main website and associated services is down (rdo, > theopensourceway.org) > > The team is working on it, I will send update when it is back (soon). > > > -- > Michael Scherer > Sysadmin, Community Infrastructure and Platform, OSAS > > > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -- Best, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at gbraad.nl Thu Jun 30 08:09:43 2016 From: me at gbraad.nl (Gerard Braad) Date: Thu, 30 Jun 2016 16:09:43 +0800 Subject: [rdo-list] [centos-ci] Artifacts server does not support ranges / resume of downloads? In-Reply-To: References: Message-ID: Hi, On Wed, Jun 29, 2016 at 6:51 PM, Haïkel wrote: > 2016-06-28 4:25 GMT+02:00 Gerard Braad : >> https://gist.github.com/gbraad/45cbe30415b0dc631f5e8d20beaffebf > Since this part of infrastructure is managed by CentOS Core Team, > CC'ing centos-devel list. I tried, as John (trown) suggested, using buildlogs.centos.org for the moment, which seems to perform a 302 redirect to buildlogs.cdn.centos.org. 
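As an aside, whether a given host honors byte-range requests can be probed by sending a ranged request (for example `curl -sI -H 'Range: bytes=0-99' <url>`) and looking for a `206 Partial Content` status line or an `Accept-Ranges: bytes` header. A minimal sketch of that check, run here against made-up sample headers rather than a live host:

```shell
# Report whether HTTP response headers (read from stdin) indicate
# byte-range support: a "206 Partial Content" status line or an
# "Accept-Ranges: bytes" header both count as support.
supports_ranges() {
  if grep -qiE '^(HTTP/[0-9.]+ 206|Accept-Ranges: *bytes)'; then
    echo yes
  else
    echo no
  fi
}

# Made-up headers for a host that supports ranged downloads:
good='HTTP/1.1 206 Partial Content
Content-Range: bytes 0-99/123456
Accept-Ranges: bytes'

# ...and for one that does not:
bad='HTTP/1.1 200 OK
Content-Length: 123456'

printf '%s\n' "$good" | supports_ranges   # prints: yes
printf '%s\n' "$bad" | supports_ranges    # prints: no
```

On a live mirror this would be, roughly: `curl -sI -H 'Range: bytes=0-99' <file-url> | supports_ranges`.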
Due to my location (China) I have to add this entry to my /etc/hosts as: 185.59.223.23 buildlogs.cdn.centos.org It seems this host allows for peaks that go over 2M/s, but overall it does not improve much, as it times out more often. Did something in the configuration of artifacts.ci.centos.org change? Currently it allows ranges for resume. And I'd rather download at 500k/s (to 2M/s) speed, as long as it is more reliable... regards, Gerard -- Gerard Braad | http://gbraad.nl [ Doing Open Source Matters ] From bderzhavets at hotmail.com Thu Jun 30 09:19:17 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 30 Jun 2016 09:19:17 +0000 Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) In-Reply-To: References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com>, , Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets Sent: Wednesday, June 29, 2016 5:14 PM To: Dan Sneddon; rdo-list at redhat.com Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) Yes, attempting to deploy ######################## # HA +2xCompute ######################## control_memory: 6144 compute_memory: 6144 undercloud_memory: 8192 # Giving the undercloud additional CPUs can greatly improve heat's # performance (and result in a shorter deploy time). undercloud_vcpu: 4 # Create three controller nodes and two compute nodes. 
overcloud_nodes: - name: control_0 flavor: control - name: control_1 flavor: control - name: control_2 flavor: control - name: compute_0 flavor: compute - name: compute_1 flavor: compute # We don't need introspection in a virtual environment (because we are # creating all the "hardware", we already know the necessary # information). introspect: false # Tell tripleo about our environment. network_isolation: true extra_args: >- --control-scale 3 --compute-scale 2 --neutron-network-type vxlan --neutron-tunnel-types vxlan -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml --ntp-server pool.ntp.org deploy_timeout: 75 tempest: false pingtest: true Results during overcloud deployment :- 2016-06-30 09:09:31 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" 2016-06-30 09:09:31 [NovaCompute]: DELETE_IN_PROGRESS state changed 2016-06-30 09:09:34 [NovaCompute]: DELETE_COMPLETE state changed 2016-06-30 09:09:44 [NovaCompute]: CREATE_IN_PROGRESS state changed 2016-06-30 09:09:48 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" . . . . . 
2016-06-30 09:11:36 [overcloud]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Compute.resources[0].resources.NovaCompute: Went to status ERROR due to "Message: Build of instance bf483c34-7010-48ea-8f58-fe192c91093f aborted: Failed to provision instance bf483c34-7010-48ea-8f58-fe192 2016-06-30 09:11:36 [1]: SIGNAL_COMPLETE Unknown 2016-06-30 09:11:36 [ControllerDeployment]: SIGNAL_COMPLETE Unknown 2016-06-30 09:11:36 [1]: CREATE_COMPLETE state changed 2016-06-30 09:11:36 [overcloud-ControllerCephDeployment-62xh7uhtpjqp]: CREATE_COMPLETE Stack CREATE completed successfully 2016-06-30 09:11:37 [NetworkDeployment]: SIGNAL_COMPLETE Unknown 2016-06-30 09:11:37 [1]: SIGNAL_COMPLETE Unknown Stack overcloud CREATE_FAILED Deployment failed: Heat Stack create failed. + heat stack-list + grep -q CREATE_FAILED + deploy_status=1 ++ heat resource-list --nested-depth 5 overcloud ++ grep FAILED ++ grep 'StructuredDeployment ' ++ cut -d '|' -f3 + exit 1 Thanks. Boris ________________________________ From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon Sent: Wednesday, June 29, 2016 1:46 PM To: rdo-list at redhat.com Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) On 06/29/2016 10:42 AM, Dan Sneddon wrote: > On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >> Boris Derzhavets has shared a?OneDrive?file with you. To view it, click >> the link below. 
>> >> [https://p.sfx.ms/icons/v2/Large/Default.png] HeatCrash2.txt 1.gz 1drv.ms GZ File >> >> HeatCrash2.txt 1.gz >> [HeatCrash2.txt 1.gz] >> >> Reattach gzip archive via One Drive >> >> >> >> ----------------------------------------------------------------------- >> *From:* rdo-list-bounces at redhat.com on >> behalf of Boris Derzhavets >> *Sent:* Wednesday, June 29, 2016 9:36 AM >> *To:* John Trowbridge; shardy at redhat.com >> *Cc:* rdo-list at redhat.com >> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again ( >> ControllerOvercloudServicesDeployment_Step4 ) >> >> >> Attempt to follow steps suggested >> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html >> >> >> ./deploy-overstack crashes >> >> >> 2016-06-29 12:42:41 >> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: >> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment >> to server failed: deploy_status_code : Deployment exited with non-zero >> status code: 6 >> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: >> CREATE_FAILED Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:43 >> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >> Resource CREATE failed: Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >> Error: >> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:44 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:45 [2]: 
SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: >> Error: >> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 >> 2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. >> + heat stack-list >> + grep -q CREATE_FAILED >> + deploy_status=1 >> ++ heat resource-list --nested-depth 5 overcloud >> ++ grep FAILED >> ++ grep 'StructuredDeployment ' >> ++ cut -d '|' -f3 >> + for failed in '$(heat resource-list --nested-depth 5 >> overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 >> + for failed in '$(heat resource-list --nested-depth 5 >> overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d >> + for failed in '$(heat resource-list --nested-depth 5 >> overcloud | grep FAILED | >> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >> + heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624 >> + exit 1 >> >> ***************************** >> Troubleshooting steps :- >> ***************************** >> >> [stack at undercloud ~]$ . 
stackrc >> [stack at undercloud ~]$ heat resource-list overcloud | grep >> ControllerNodesPost >> | ControllerNodesPostDeployment | >> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >> OS::TripleO::ControllerPostDeployment | CREATE_FAILED | >> 2016-06-29T12:11:21 | >> >> >> [stack at undercloud ~]$ heat stack-list -n | grep "^| >> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3" >> | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >> | CREATE_FAILED | 2016-06-29T12:31:11 | None | >> 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | >> >> >> >> [stack at undercloud ~]$ heat event-list -m >> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >> >> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >> | resource_name | >> id | >> resource_status_reason >> | resource_status | event_time | >> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | >> 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started >> . . . . . . . . . . . . . . . . . >> Step1,2,3 succeeded >> . . . . . . . . . . . . . . . . . 
>> >> | CREATE_IN_PROGRESS | 2016-06-29T12:31:14 | >> | ControllerPuppetConfig | >> a2a1df33-5106-425c-b16d-8d2df709b19f | state >> changed >> | CREATE_COMPLETE | 2016-06-29T12:35:02 | >> | ControllerOvercloudServicesDeployment_Step4 | >> 1e151333-4de5-4e7b-907c-ea0f42d31a47 | state >> changed >> | CREATE_IN_PROGRESS | 2016-06-29T12:35:03 | >> | ControllerOvercloudServicesDeployment_Step4 | >> 7bf36334-3d92-4554-b6c0-41294a072ab6 | Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 | CREATE_FAILED | >> 2016-06-29T12:42:42 | >> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >> | e72fb6f4-c2aa-4fe8-9bd1-5f5ad152685c | Resource CREATE failed: >> Error: >> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >> Deployment to server failed: deploy_status_code: Deployment exited with >> non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:43 | >> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >> >> [stack at undercloud ~]$ heat stack-show >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | grep >> NodeConfigIdentifiers >> | | "NodeConfigIdentifiers": >> "{u'deployment_identifier': 1467202276, u'controller_config': {u'1': >> u'os-apply-config deployment 796df02a-7550-414b-a084-8b591a13e6db >> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >> u'0': u'os-apply-config deployment 613ec889-d852-470a-8e4c-6e243e1d2033 >> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >> u'2': u'os-apply-config deployment c8b099d0-3af4-4ba0-a056-a0ce60f40e2d >> completed,Root CA cert injection not 
enabled.,TLS not enabled.,None,'}, >> u'allnodes_extra': u'none'}" | >> >> However, when stack creating crashed update wouldn't help. >> >> [stack at undercloud ~]$ heat stack-update -x >> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk -e update_env.yaml >> ERROR: PATCH update to non-COMPLETE stack is not supported. >> >> DUE TO :- >> >> [stack at undercloud ~]$ heat stack-list >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> | id | stack_name | stack_status | >> creation_time | updated_time | >> +--------------------------------------+------------+---------------+---------------------+--------------+ >> | 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | overcloud | CREATE_FAILED | >> 2016-06-29T12:11:20 | None | >> +--------------------------------------+------------+---------------+---------------------+------ >> >> >> Complete error file `heat deployment-show >> 655c77fc-6a78-4cca-b4b7-a153a3f4ad52` is attached a gzip archive. >> >> >> Thanks. >> >> Boris. >> >> >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> > > The failure occurred during the post-deployment, which means that the > initial deployment succeeded, but then the steps that are done to the > completed overcloud failed. > > This is most commonly attributable to network problems between the > Undercloud and the Overcloud Public API. The Undercloud needs to reach > the Public API in order to do some of the post-configuration steps. If > this API isn't reachable, you end up with the error you saw above. > > You can test this connectivity by pinging the Public API VIP from the > Undercloud. Starting with the failed deployment, run "neutron > port-list" against the Underlcloud and look for the IP on the port > named "public_virtual_ip". 
You should be able to ping this address from
> the Undercloud. If you can't reach that IP, then you need to check the
> connectivity/routing between the Undercloud and the External network on
> the Overcloud.

I should also mention common causes of this problem:

* Incorrect value for ExternalInterfaceDefaultRoute in the network environment file.
* Controllers do not have the default route on the External network in the NIC config templates (required for reachability from remote subnets).
* Incorrect subnet mask on the ExternalNetCidr in the network environment.
* Incorrect ExternalAllocationPools values in the network environment.
* Incorrect Ethernet switch config for the Controllers.

The issue had been reproduced four times, on a daily basis since 06/25/16, with exactly the same error at Step4 of overcloud-ControllerNodesPostDeployment. In the meantime I cannot reproduce the error; the 3xNode HA Controller + 1xCompute config works.
There was one more issue: a 3xNode HA Controller + 2xCompute deployment failed immediately when overcloud-deploy.sh started, because only 4 nodes could be introspected. I will test it again tomorrow morning.

Thanks a lot.
Boris.

--
Dan Sneddon | Principal OpenStack Engineer
dsneddon at redhat.com | redhat.com/openstack
650.254.4025 | dsneddon:irc @dxs:twitter

_______________________________________________
rdo-list mailing list
rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
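Dan's connectivity check above can be scripted. A minimal sketch, using a fabricated `neutron port-list` row (the real command needs a working undercloud, so the sample line here is an assumption standing in for live output):

```shell
# Extract the Public API VIP from port-list output and ping it.
# 'sample' is a fabricated row; on a real undercloud, source stackrc and
# pipe the actual `neutron port-list` output through the same filter.
sample='| 4ad2 | public_virtual_ip | fa:16:3e:00:00:01 | {"subnet_id": "s1", "ip_address": "10.0.0.4"} |'
vip=$(printf '%s\n' "$sample" | grep 'public_virtual_ip' \
      | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -n1)
echo "$vip"
# with real output, follow up with: ping -c 3 "$vip"
```

If the ping fails, that points at the connectivity/routing problems Dan lists above.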
URL:

From javier.pena at redhat.com Thu Jun 30 11:14:30 2016
From: javier.pena at redhat.com (Javier Pena)
Date: Thu, 30 Jun 2016 07:14:30 -0400 (EDT)
Subject: [rdo-list] [DLRN] Switching the fedora-master worker from f23 to f24
In-Reply-To: <268199787.3176005.1467123686463.JavaMail.zimbra@redhat.com>
References: <268199787.3176005.1467123686463.JavaMail.zimbra@redhat.com>
Message-ID: <1289792403.3842115.1467285270340.JavaMail.zimbra@redhat.com>

> Hi RDO,
>
> Now that Fedora 24 has been released, we are removing the old fedora-master
> worker (based on f23) and setting up the new one, based on f24. Also, the
> fedora-rawhide-master worker will be reconfigured to use
> http://trunk.rdoproject.org/f25 as its base URL.
>
> It will take some time to clean up the old data and bootstrap the new worker;
> we will keep you updated on the current status.

Hi all,

The new worker is now ready; remember, its new URL is http://trunk.rdoproject.org/f24 (and http://trunk.rdoproject.org/f25 for the Rawhide packages).

Some packages are failing to build; I hope to fix this over time.

Thanks,
Javier

From hguemar at fedoraproject.org Thu Jun 30 12:54:22 2016
From: hguemar at fedoraproject.org (Haïkel)
Date: Thu, 30 Jun 2016 14:54:22 +0200
Subject: [rdo-list] [centos-ci] Artifacts server does not support ranges / resume of downloads?
In-Reply-To: References: Message-ID:

FYI, most of the CentOS Core Team are at RH Summit.
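For anyone wanting to consume the new worker Javier announced above, a hypothetical sketch of pointing a client at it — the `current/delorean.repo` path follows the usual DLRN repo layout and is an assumption here, not something stated in the announcement:

```shell
# Build the repo-file URL for the new f24 DLRN worker. Only the base URL
# comes from the announcement; the current/delorean.repo suffix is the
# conventional DLRN layout (assumed).
base=http://trunk.rdoproject.org/f24
repo_url="$base/current/delorean.repo"
echo "$repo_url"
# on a Fedora 24 host, roughly:
#   curl -o /etc/yum.repos.d/delorean.repo "$repo_url"
#   dnf clean all && dnf update
```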
From trown at redhat.com Thu Jun 30 14:14:59 2016
From: trown at redhat.com (John Trowbridge)
Date: Thu, 30 Jun 2016 10:14:59 -0400
Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
In-Reply-To: 
References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com>
Message-ID: <57752963.6000209@redhat.com>

On 06/30/2016 05:19 AM, Boris Derzhavets wrote:
>
> ________________________________
> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
> Sent: Wednesday, June 29, 2016 5:14 PM
> To: Dan Sneddon; rdo-list at redhat.com
> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
>
> Yes, attempt to deploy
>
> ########################
> # HA +2xCompute
> ########################
> control_memory: 6144
> compute_memory: 6144
>
> undercloud_memory: 8192
>
> # Giving the undercloud additional CPUs can greatly improve heat's
> # performance (and result in a shorter deploy time).
> undercloud_vcpu: 4

Increasing this without also increasing the memory on the undercloud will usually end in sadness, because more CPUs means more worker processes, which means more memory consumption. In general, straying from the values in CI is unlikely to work unless you have significantly better hardware than what runs in CI (32G hosts with decent CPU).

https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/ha.yml#L13

It is not 100% certain that this is the root cause of your issue, as the logs below look like we hit issues either with Ironic deployment to the nodes, or some issue with the Nova scheduler. Note that this is definitely a different problem (and possibly transient) than the one reported in the beginning of this thread.
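A rough back-of-the-envelope illustrating John's point that more undercloud vCPUs mean more worker processes and therefore more memory. The service count and per-worker size below are illustrative assumptions, not measured values:

```shell
# Many undercloud API/engine services spawn roughly one worker per vCPU,
# so memory use scales with undercloud_vcpu. All numbers below are
# illustrative assumptions for the shape of the calculation only.
vcpus=4            # undercloud_vcpu from the config above
services=6         # e.g. heat, nova, neutron, glance, ironic, keystone (assumed)
mb_per_worker=300  # assumed resident size per worker process
echo $(( vcpus * services * mb_per_worker ))   # 7200 (MB)
```

Against an undercloud_memory of 8192 MB that leaves little headroom, which is why raising undercloud_vcpu alone tends to end badly.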
> > # Create three controller nodes and one compute node. > overcloud_nodes: > - name: control_0 > flavor: control > - name: control_1 > flavor: control > - name: control_2 > flavor: control > > - name: compute_0 > flavor: compute > - name: compute_1 > flavor: compute > > # We don't need introspection in a virtual environment (because we are > # creating all the "hardware" we really know the necessary > # information). > introspect: false > > # Tell tripleo about our environment. > network_isolation: true > extra_args: >- > --control-scale 3 --compute-scale 2 --neutron-network-type vxlan > --neutron-tunnel-types vxlan > -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > --ntp-server pool.ntp.org > deploy_timeout: 75 > tempest: false > pingtest: true > > Results during overcloud deployment :- > > 2016-06-30 09:09:31 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" > 2016-06-30 09:09:31 [NovaCompute]: DELETE_IN_PROGRESS state changed > 2016-06-30 09:09:34 [NovaCompute]: DELETE_COMPLETE state changed > 2016-06-30 09:09:44 [NovaCompute]: CREATE_IN_PROGRESS state changed > 2016-06-30 09:09:48 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" > . . . . . 
> > 2016-06-30 09:11:36 [overcloud]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Compute.resources[0].resources.NovaCompute: Went to status ERROR due to "Message: Build of instance bf483c34-7010-48ea-8f58-fe192c91093f aborted: Failed to provision instance bf483c34-7010-48ea-8f58-fe192 > 2016-06-30 09:11:36 [1]: SIGNAL_COMPLETE Unknown > 2016-06-30 09:11:36 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-06-30 09:11:36 [1]: CREATE_COMPLETE state changed > 2016-06-30 09:11:36 [overcloud-ControllerCephDeployment-62xh7uhtpjqp]: CREATE_COMPLETE Stack CREATE completed successfully > 2016-06-30 09:11:37 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-06-30 09:11:37 [1]: SIGNAL_COMPLETE Unknown > Stack overcloud CREATE_FAILED > Deployment failed: Heat Stack create failed. > + heat stack-list > + grep -q CREATE_FAILED > + deploy_status=1 > ++ heat resource-list --nested-depth 5 overcloud > ++ grep FAILED > ++ grep 'StructuredDeployment ' > ++ cut -d '|' -f3 > + exit 1 > > > Thanks. > > Boris > > > ________________________________ > From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon > Sent: Wednesday, June 29, 2016 1:46 PM > To: rdo-list at redhat.com > Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) > > On 06/29/2016 10:42 AM, Dan Sneddon wrote: >> On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >>> Boris Derzhavets has shared a?OneDrive?file with you. To view it, click >>> the link below. 
>>> >>> > [https://p.sfx.ms/icons/v2/Large/Default.png] > > HeatCrash2.txt 1.gz > 1drv.ms > GZ File > > >>> >>> HeatCrash2.txt 1.gz >>> [HeatCrash2.txt 1.gz] >>> >>> Reattach gzip archive via One Drive >>> >>> >>> >>> ----------------------------------------------------------------------- >>> *From:* rdo-list-bounces at redhat.com on >>> behalf of Boris Derzhavets >>> *Sent:* Wednesday, June 29, 2016 9:36 AM >>> *To:* John Trowbridge; shardy at redhat.com >>> *Cc:* rdo-list at redhat.com >>> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again ( >>> ControllerOvercloudServicesDeployment_Step4 ) >>> >>> >>> Attempt to follow steps suggested >>> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html >>> >>> >>> ./deploy-overstack crashes >>> >>> >>> 2016-06-29 12:42:41 >>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: >>> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment >>> to server failed: deploy_status_code : Deployment exited with non-zero >>> status code: 6 >>> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: >>> CREATE_FAILED Error: >>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:43 >>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >>> Resource CREATE failed: Error: >>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >>> Error: >>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:44 [2]: 
SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: >>> Error: >>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown >>> Stack overcloud CREATE_FAILED >>> Deployment failed: Heat Stack create failed. >>> + heat stack-list >>> + grep -q CREATE_FAILED >>> + deploy_status=1 >>> ++ heat resource-list --nested-depth 5 overcloud >>> ++ grep FAILED >>> ++ grep 'StructuredDeployment ' >>> ++ cut -d '|' -f3 >>> + for failed in '$(heat resource-list --nested-depth 5 >>> overcloud | grep FAILED | >>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>> + heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 >>> + for failed in '$(heat resource-list --nested-depth 5 >>> overcloud | grep FAILED | >>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>> + heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d >>> + for failed in '$(heat resource-list --nested-depth 5 >>> overcloud | grep FAILED | >>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>> + heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624 >>> + exit 1 >>> >>> ***************************** >>> Troubleshooting steps :- >>> ***************************** >>> >>> [stack at undercloud ~]$ . 
stackrc >>> [stack at undercloud ~]$ heat resource-list overcloud | grep >>> ControllerNodesPost >>> | ControllerNodesPostDeployment | >>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>> OS::TripleO::ControllerPostDeployment | CREATE_FAILED | >>> 2016-06-29T12:11:21 | >>> >>> >>> [stack at undercloud ~]$ heat stack-list -n | grep "^| >>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3" >>> | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>> | CREATE_FAILED | 2016-06-29T12:31:11 | None | >>> 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | >>> >>> >>> >>> [stack at undercloud ~]$ heat event-list -m >>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 >>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>> >>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>> | resource_name | >>> id | >>> resource_status_reason >>> | resource_status | event_time | >>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | >>> 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started >>> . . . . . . . . . . . . . . . . . >>> Step1,2,3 succeeded >>> . . . . . . . . . . . . . . . . . 
>>> [remaining event-list output, Dan Sneddon's reply, and mailing list footers snipped -- identical to the messages earlier in this thread]
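The failure handler visible in the script traces above extracts the failed StructuredDeployment IDs with grep and cut before calling `heat deployment-show`. The same pipeline can be tried on a fabricated resource-list row (the sample line is an assumption standing in for real `heat resource-list --nested-depth 5 overcloud` output):

```shell
# Pull the physical resource ID (third '|'-delimited field) of a failed
# StructuredDeployment out of a heat resource-list row. 'sample' is a
# fabricated row matching the table layout seen in the logs above.
sample='| 2 | 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 | OS::Heat::StructuredDeployment | CREATE_FAILED | 2016-06-29T12:35:03 |'
id=$(printf '%s\n' "$sample" | grep FAILED | grep 'StructuredDeployment ' \
     | cut -d '|' -f3 | tr -d ' ')
echo "$id"
# real usage: heat deployment-show "$id"
```

This is exactly what produced the `heat deployment-show 655c77fc-...` calls quoted earlier.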
> From bderzhavets at hotmail.com Thu Jun 30 16:56:44 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 30 Jun 2016 16:56:44 +0000 Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) In-Reply-To: <57752963.6000209@redhat.com> References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com> , <57752963.6000209@redhat.com> Message-ID: ________________________________ From: John Trowbridge Sent: Thursday, June 30, 2016 10:14 AM To: Boris Derzhavets; Dan Sneddon; rdo-list at redhat.com Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) On 06/30/2016 05:19 AM, Boris Derzhavets wrote: > > > > ________________________________ > From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets > Sent: Wednesday, June 29, 2016 5:14 PM > To: Dan Sneddon; rdo-list at redhat.com > Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) > > Yes , attempt to deploy > > ######################## > # HA +2xCompute > ######################## > control_memory: 6144 > compute_memory: 6144 > > undercloud_memory: 8192 > > # Giving the undercloud additional CPUs can greatly improve heat's > # performance (and result in a shorter deploy time). > undercloud_vcpu: 4 Increasing this without also increasing the memory on the undercloud will usually end in sadness, because more CPUs means more worker processes means more memory consumption. In general straying from the values in CI, is unlikely to work unless you have significantly better hardware than what runs in CI (32G hosts with decent CPU). It will be verified tomorrow with undercloud_vcpu: 2 This test would be a fair . 
It will take about 2 hr. But, I still believe that it is not root cause of issue with Configuration - 3xController(HA) + 2xCompute having :- undercloud_memory: 8192 undercloud_vcpu: 4 which was tested many times OK since 06/05 up to 06/24 with no problems. Thank you very much for feedback Boris. https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/ha.yml#L13 It is not 100% that is the root cause of your issue, as the logs below look like we hit issues either with Ironic deployment to the nodes, or some issue with Nova scheduler. Note, that is definitely a different problem (and possibly transient), than the one reported in the beginning of this thread. > > # Create three controller nodes and one compute node. > overcloud_nodes: > - name: control_0 > flavor: control > - name: control_1 > flavor: control > - name: control_2 > flavor: control > > - name: compute_0 > flavor: compute > - name: compute_1 > flavor: compute > > # We don't need introspection in a virtual environment (because we are > # creating all the "hardware" we really know the necessary > # information). > introspect: false > > # Tell tripleo about our environment. > network_isolation: true > extra_args: >- > --control-scale 3 --compute-scale 2 --neutron-network-type vxlan > --neutron-tunnel-types vxlan > -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml > --ntp-server pool.ntp.org > deploy_timeout: 75 > tempest: false > pingtest: true > > Results during overcloud deployment :- > > 2016-06-30 09:09:31 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. 
There are not enough hosts available., Code: 500" > 2016-06-30 09:09:31 [NovaCompute]: DELETE_IN_PROGRESS state changed > 2016-06-30 09:09:34 [NovaCompute]: DELETE_COMPLETE state changed > 2016-06-30 09:09:44 [NovaCompute]: CREATE_IN_PROGRESS state changed > 2016-06-30 09:09:48 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" > . . . . . > > 2016-06-30 09:11:36 [overcloud]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Compute.resources[0].resources.NovaCompute: Went to status ERROR due to "Message: Build of instance bf483c34-7010-48ea-8f58-fe192c91093f aborted: Failed to provision instance bf483c34-7010-48ea-8f58-fe192 > 2016-06-30 09:11:36 [1]: SIGNAL_COMPLETE Unknown > 2016-06-30 09:11:36 [ControllerDeployment]: SIGNAL_COMPLETE Unknown > 2016-06-30 09:11:36 [1]: CREATE_COMPLETE state changed > 2016-06-30 09:11:36 [overcloud-ControllerCephDeployment-62xh7uhtpjqp]: CREATE_COMPLETE Stack CREATE completed successfully > 2016-06-30 09:11:37 [NetworkDeployment]: SIGNAL_COMPLETE Unknown > 2016-06-30 09:11:37 [1]: SIGNAL_COMPLETE Unknown > Stack overcloud CREATE_FAILED > Deployment failed: Heat Stack create failed. > + heat stack-list > + grep -q CREATE_FAILED > + deploy_status=1 > ++ heat resource-list --nested-depth 5 overcloud > ++ grep FAILED > ++ grep 'StructuredDeployment ' > ++ cut -d '|' -f3 > + exit 1 > > > Thanks. > > Boris > > > ________________________________ > From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon > Sent: Wednesday, June 29, 2016 1:46 PM > To: rdo-list at redhat.com > Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) > > On 06/29/2016 10:42 AM, Dan Sneddon wrote: >> On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >>> Boris Derzhavets has shared a?OneDrive?file with you. To view it, click >>> the link below. 
>>> >>> > [https://p.sfx.ms/icons/v2/Large/Default.png] > > HeatCrash2.txt 1.gz > 1drv.ms > GZ File > > >>> >>> HeatCrash2.txt 1.gz >>> [HeatCrash2.txt 1.gz] >>> >>> Reattach gzip archive via One Drive >>> >>> >>> >>> ----------------------------------------------------------------------- >>> *From:* rdo-list-bounces at redhat.com on >>> behalf of Boris Derzhavets >>> *Sent:* Wednesday, June 29, 2016 9:36 AM >>> *To:* John Trowbridge; shardy at redhat.com >>> *Cc:* rdo-list at redhat.com >>> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again ( >>> ControllerOvercloudServicesDeployment_Step4 ) >>> >>> >>> Attempt to follow steps suggested >>> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html >>> >>> >>> ./deploy-overstack crashes >>> >>> >>> 2016-06-29 12:42:41 >>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: >>> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment >>> to server failed: deploy_status_code : Deployment exited with non-zero >>> status code: 6 >>> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: >>> CREATE_FAILED Error: >>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:43 >>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >>> Resource CREATE failed: Error: >>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >>> Error: >>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:44 [2]: 
SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: >>> Error: >>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 >>> 2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>> 2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown >>> Stack overcloud CREATE_FAILED >>> Deployment failed: Heat Stack create failed. >>> + heat stack-list >>> + grep -q CREATE_FAILED >>> + deploy_status=1 >>> ++ heat resource-list --nested-depth 5 overcloud >>> ++ grep FAILED >>> ++ grep 'StructuredDeployment ' >>> ++ cut -d '|' -f3 >>> + for failed in '$(heat resource-list --nested-depth 5 >>> overcloud | grep FAILED | >>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>> + heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 >>> + for failed in '$(heat resource-list --nested-depth 5 >>> overcloud | grep FAILED | >>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>> + heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d >>> + for failed in '$(heat resource-list --nested-depth 5 >>> overcloud | grep FAILED | >>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>> + heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624 >>> + exit 1 >>> >>> ***************************** >>> Troubleshooting steps :- >>> ***************************** >>> >>> [stack at undercloud ~]$ . 
stackrc >>> [stack at undercloud ~]$ heat resource-list overcloud | grep >>> ControllerNodesPost >>> | ControllerNodesPostDeployment | >>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>> OS::TripleO::ControllerPostDeployment | CREATE_FAILED | >>> 2016-06-29T12:11:21 | >>> >>> >>> [stack at undercloud ~]$ heat stack-list -n | grep "^| >>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3" >>> | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>> | CREATE_FAILED | 2016-06-29T12:31:11 | None | >>> 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | >>> >>> >>> >>> [stack at undercloud ~]$ heat event-list -m >>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 >>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>> >>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>> | resource_name | >>> id | >>> resource_status_reason >>> | resource_status | event_time | >>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | >>> 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started >>> . . . . . . . . . . . . . . . . . >>> Step1,2,3 succeeded >>> . . . . . . . . . . . . . . . . . 
>>> >>> | CREATE_IN_PROGRESS | 2016-06-29T12:31:14 | >>> | ControllerPuppetConfig | >>> a2a1df33-5106-425c-b16d-8d2df709b19f | state >>> changed >>> | CREATE_COMPLETE | 2016-06-29T12:35:02 | >>> | ControllerOvercloudServicesDeployment_Step4 | >>> 1e151333-4de5-4e7b-907c-ea0f42d31a47 | state >>> changed >>> | CREATE_IN_PROGRESS | 2016-06-29T12:35:03 | >>> | ControllerOvercloudServicesDeployment_Step4 | >>> 7bf36334-3d92-4554-b6c0-41294a072ab6 | Error: >>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 | CREATE_FAILED | >>> 2016-06-29T12:42:42 | >>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>> | e72fb6f4-c2aa-4fe8-9bd1-5f5ad152685c | Resource CREATE failed: >>> Error: >>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>> Deployment to server failed: deploy_status_code: Deployment exited with >>> non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:43 | >>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>> >>> [stack at undercloud ~]$ heat stack-show >>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | grep >>> NodeConfigIdentifiers >>> | | "NodeConfigIdentifiers": >>> "{u'deployment_identifier': 1467202276, u'controller_config': {u'1': >>> u'os-apply-config deployment 796df02a-7550-414b-a084-8b591a13e6db >>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >>> u'0': u'os-apply-config deployment 613ec889-d852-470a-8e4c-6e243e1d2033 >>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >>> u'2': u'os-apply-config deployment c8b099d0-3af4-4ba0-a056-a0ce60f40e2d >>> 
completed,Root CA cert injection not enabled.,TLS not enabled.,None,'},
>>> u'allnodes_extra': u'none'}" |
>>>
>>> However, when stack creation crashed, an update wouldn't help.
>>>
>>> [stack at undercloud ~]$ heat stack-update -x
>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk -e update_env.yaml
>>> ERROR: PATCH update to non-COMPLETE stack is not supported.
>>>
>>> DUE TO :-
>>>
>>> [stack at undercloud ~]$ heat stack-list
>>> +--------------------------------------+------------+---------------+---------------------+--------------+
>>> | id                                   | stack_name | stack_status  |
>>> creation_time | updated_time |
>>> +--------------------------------------+------------+---------------+---------------------+--------------+
>>> | 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | overcloud  | CREATE_FAILED |
>>> 2016-06-29T12:11:20 | None |
>>> +--------------------------------------+------------+---------------+---------------------+------
>>>
>>> The complete error file from `heat deployment-show
>>> 655c77fc-6a78-4cca-b4b7-a153a3f4ad52` is attached as a gzip archive.
>>>
>>> Thanks.
>>>
>>> Boris.
>>>
>>> _______________________________________________
>>> rdo-list mailing list
>>> rdo-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/rdo-list
>>>
>>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>>
>>
>> The failure occurred during the post-deployment, which means that the
>> initial deployment succeeded, but then the steps that are done to the
>> completed overcloud failed.
>>
>> This is most commonly attributable to network problems between the
>> Undercloud and the Overcloud Public API. The Undercloud needs to reach
>> the Public API in order to do some of the post-configuration steps. If
>> this API isn't reachable, you end up with the error you saw above.
>>
>> You can test this connectivity by pinging the Public API VIP from the
>> Undercloud.
Starting with the failed deployment, run "neutron
>> port-list" against the Undercloud and look for the IP on the port
>> named "public_virtual_ip". You should be able to ping this address from
>> the Undercloud. If you can't reach that IP, then you need to check the
>> connectivity/routing between the Undercloud and the External network on
>> the Overcloud.
>>
>
> I should also mention common causes of this problem:
>
> * Incorrect value for ExternalInterfaceDefaultRoute in the network
> environment file.
> * Controllers do not have the default route on the External network in
> the NIC config templates (required for reachability from remote subnets).
> * Incorrect subnet mask on the ExternalNetCidr in the network environment.
> * Incorrect ExternalAllocationPools values in the network environment.
> * Incorrect Ethernet switch config for the Controllers.
>
> The issue has been reproduced four times, on a daily basis since
> 06/25/16, with exactly the same error at Step4 of
> overcloud-ControllerNodesPostDeployment.
> In the meantime I cannot reproduce the error.
> The config 3xNode HA Controller + 1xCompute works.
> There was one more issue: 3xNode HA Controller + 2xCompute
> failed immediately when overcloud-deploy.sh started because
> only 4 nodes could be introspected. I will test it tomorrow morning.
>
> Thanks a lot.
> Boris.
>
> --
> Dan Sneddon | Principal OpenStack Engineer
> dsneddon at redhat.com | redhat.com/openstack
> 650.254.4025 | dsneddon:irc @dxs:twitter
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com
>
> This body part will be downloaded on demand.
-------------- next part --------------
An HTML attachment was scrubbed...
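The VIP reachability check described above can be scripted. This is a minimal sketch, assuming the undercloud rc file is at ~/stackrc and the VIP port uses the default name "public_virtual_ip"; adjust both if your environment differs:

```shell
# Sketch of the Public API VIP reachability check. Assumes the
# undercloud credentials are in ~/stackrc and the VIP port is named
# "public_virtual_ip" (the default in the TripleO templates).
source ~/stackrc

# Extract the fixed IP from the port's fixed_ips JSON field.
VIP=$(neutron port-list | grep public_virtual_ip \
      | grep -o '"ip_address": "[^"]*"' | cut -d'"' -f4)

echo "Public API VIP: $VIP"
# If this ping fails, check routing between the undercloud and the
# External network (default route, ExternalNetCidr, allocation pools).
ping -c 3 "$VIP"
```

If the ping fails, the common causes listed in the reply above (default route, CIDR, allocation pools, switch config) are the first things to check.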
URL: From trown at redhat.com Thu Jun 30 17:47:02 2016 From: trown at redhat.com (John Trowbridge) Date: Thu, 30 Jun 2016 13:47:02 -0400 Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) In-Reply-To: References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com> <57752963.6000209@redhat.com> Message-ID: <57755B16.70706@redhat.com> On 06/30/2016 12:56 PM, Boris Derzhavets wrote: > > > > ________________________________ > From: John Trowbridge > Sent: Thursday, June 30, 2016 10:14 AM > To: Boris Derzhavets; Dan Sneddon; rdo-list at redhat.com > Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) > > > > On 06/30/2016 05:19 AM, Boris Derzhavets wrote: >> >> >> >> ________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >> Sent: Wednesday, June 29, 2016 5:14 PM >> To: Dan Sneddon; rdo-list at redhat.com >> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) >> >> Yes , attempt to deploy >> >> ######################## >> # HA +2xCompute >> ######################## >> control_memory: 6144 >> compute_memory: 6144 >> >> undercloud_memory: 8192 >> >> # Giving the undercloud additional CPUs can greatly improve heat's >> # performance (and result in a shorter deploy time). >> undercloud_vcpu: 4 > > Increasing this without also increasing the memory on the undercloud > will usually end in sadness, because more CPUs means more worker > processes means more memory consumption. 
In general, straying from the
> values in CI is unlikely to work unless you have significantly better
> hardware than what runs in CI (32G hosts with decent CPU).
>
> It will be verified tomorrow with
> undercloud_vcpu: 2
> This test would be fair. It will take about 2 hr.
> But I still believe that it is not the root cause of the issue with
> the configuration 3xController(HA) + 2xCompute having :-
> undercloud_memory: 8192
> undercloud_vcpu: 4
> which was tested OK many times from 06/05 up to 06/24
> with no problems.

Just realized that you are also deploying 2x compute nodes. Just FYI,
even the basic HA setup barely fits on a 32G host. In fact on 3 of the 4
nodes in CI, we rarely get a pass of HA because the resources are so
tight. We will actually be switching that job to a single-controller job
with pacemaker for exactly that reason (an email to the RDO list about
that will come later this afternoon).

How big is the virthost you are using?

> Thank you very much for the feedback
> Boris.
>
> https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/ha.yml#L13
>
> It is not 100% certain that is the root cause of your issue, as the logs
> below look like we hit issues either with Ironic deployment to the
> nodes, or some issue with the Nova scheduler. Note, that is definitely a
> different problem (and possibly transient) than the one reported in the
> beginning of this thread.
>
>>
>> # Create three controller nodes and one compute node.
>> overcloud_nodes:
>>   - name: control_0
>>     flavor: control
>>   - name: control_1
>>     flavor: control
>>   - name: control_2
>>     flavor: control
>>
>>   - name: compute_0
>>     flavor: compute
>>   - name: compute_1
>>     flavor: compute
>>
>> # We don't need introspection in a virtual environment (because we are
>> # creating all the "hardware" we really know the necessary
>> # information).
>> introspect: false
>>
>> # Tell tripleo about our environment.
>> network_isolation: true >> extra_args: >- >> --control-scale 3 --compute-scale 2 --neutron-network-type vxlan >> --neutron-tunnel-types vxlan >> -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> --ntp-server pool.ntp.org >> deploy_timeout: 75 >> tempest: false >> pingtest: true >> >> Results during overcloud deployment :- >> >> 2016-06-30 09:09:31 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" >> 2016-06-30 09:09:31 [NovaCompute]: DELETE_IN_PROGRESS state changed >> 2016-06-30 09:09:34 [NovaCompute]: DELETE_COMPLETE state changed >> 2016-06-30 09:09:44 [NovaCompute]: CREATE_IN_PROGRESS state changed >> 2016-06-30 09:09:48 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" >> . . . . . >> >> 2016-06-30 09:11:36 [overcloud]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Compute.resources[0].resources.NovaCompute: Went to status ERROR due to "Message: Build of instance bf483c34-7010-48ea-8f58-fe192c91093f aborted: Failed to provision instance bf483c34-7010-48ea-8f58-fe192 >> 2016-06-30 09:11:36 [1]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:36 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:36 [1]: CREATE_COMPLETE state changed >> 2016-06-30 09:11:36 [overcloud-ControllerCephDeployment-62xh7uhtpjqp]: CREATE_COMPLETE Stack CREATE completed successfully >> 2016-06-30 09:11:37 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:37 [1]: SIGNAL_COMPLETE Unknown >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. 
>> + heat stack-list
>> + grep -q CREATE_FAILED
>> + deploy_status=1
>> ++ heat resource-list --nested-depth 5 overcloud
>> ++ grep FAILED
>> ++ grep 'StructuredDeployment '
>> ++ cut -d '|' -f3
>> + exit 1
>>
>> Thanks.
>>
>> Boris
>>
>> ________________________________
>> From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon
>> Sent: Wednesday, June 29, 2016 1:46 PM
>> To: rdo-list at redhat.com
>> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
>>
>> On 06/29/2016 10:42 AM, Dan Sneddon wrote:
>>> On 06/29/2016 07:03 AM, Boris Derzhavets wrote:
>>>> Boris Derzhavets has shared a OneDrive file with you. To view it, click
>>>> the link below.
>>>>
>> [https://p.sfx.ms/icons/v2/Large/Default.png]
>>
>> HeatCrash2.txt 1.gz
>> 1drv.ms
>> GZ File
>>
>>>>
>>>> HeatCrash2.txt 1.gz
>>>> [HeatCrash2.txt 1.gz]
>>>>
>>>> Reattached the gzip archive via OneDrive.
>>>>
>>>> -----------------------------------------------------------------------
>>>> *From:* rdo-list-bounces at redhat.com on
>>>> behalf of Boris Derzhavets
>>>> *Sent:* Wednesday, June 29, 2016 9:36 AM
>>>> *To:* John Trowbridge; shardy at redhat.com
>>>> *Cc:* rdo-list at redhat.com
>>>> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again (
>>>> ControllerOvercloudServicesDeployment_Step4 )
>>>>
>>>> Attempt to follow the steps suggested
>>>> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html
>>>>
>>>> ./deploy-overstack crashes
>>>>
>>>> 2016-06-29 12:42:41
>>>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]:
>>>> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment
>>>> to server failed: deploy_status_code : Deployment exited with non-zero
>>>> status code: 6
>>>> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]:
>>>> CREATE_FAILED Error:
resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:43 >>>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >>>> Resource CREATE failed: Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >>>> Error: >>>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:44 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: >>>> Error: >>>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown >>>> Stack overcloud CREATE_FAILED >>>> Deployment failed: Heat Stack create failed. 
>>>> + heat stack-list >>>> + grep -q CREATE_FAILED >>>> + deploy_status=1 >>>> ++ heat resource-list --nested-depth 5 overcloud >>>> ++ grep FAILED >>>> ++ grep 'StructuredDeployment ' >>>> ++ cut -d '|' -f3 >>>> + for failed in '$(heat resource-list --nested-depth 5 >>>> overcloud | grep FAILED | >>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>>> + heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 >>>> + for failed in '$(heat resource-list --nested-depth 5 >>>> overcloud | grep FAILED | >>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>>> + heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d >>>> + for failed in '$(heat resource-list --nested-depth 5 >>>> overcloud | grep FAILED | >>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>>> + heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624 >>>> + exit 1 >>>> >>>> ***************************** >>>> Troubleshooting steps :- >>>> ***************************** >>>> >>>> [stack at undercloud ~]$ . 
stackrc >>>> [stack at undercloud ~]$ heat resource-list overcloud | grep >>>> ControllerNodesPost >>>> | ControllerNodesPostDeployment | >>>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>>> OS::TripleO::ControllerPostDeployment | CREATE_FAILED | >>>> 2016-06-29T12:11:21 | >>>> >>>> >>>> [stack at undercloud ~]$ heat stack-list -n | grep "^| >>>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3" >>>> | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>>> | CREATE_FAILED | 2016-06-29T12:31:11 | None | >>>> 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | >>>> >>>> >>>> >>>> [stack at undercloud ~]$ heat event-list -m >>>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>>> >>>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>>> | resource_name | >>>> id | >>>> resource_status_reason >>>> | resource_status | event_time | >>>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | >>>> 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started >>>> . . . . . . . . . . . . . . . . . >>>> Step1,2,3 succeeded >>>> . . . . . . . . . . . . . . . . . 
>>>> >>>> | CREATE_IN_PROGRESS | 2016-06-29T12:31:14 | >>>> | ControllerPuppetConfig | >>>> a2a1df33-5106-425c-b16d-8d2df709b19f | state >>>> changed >>>> | CREATE_COMPLETE | 2016-06-29T12:35:02 | >>>> | ControllerOvercloudServicesDeployment_Step4 | >>>> 1e151333-4de5-4e7b-907c-ea0f42d31a47 | state >>>> changed >>>> | CREATE_IN_PROGRESS | 2016-06-29T12:35:03 | >>>> | ControllerOvercloudServicesDeployment_Step4 | >>>> 7bf36334-3d92-4554-b6c0-41294a072ab6 | Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 | CREATE_FAILED | >>>> 2016-06-29T12:42:42 | >>>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>>> | e72fb6f4-c2aa-4fe8-9bd1-5f5ad152685c | Resource CREATE failed: >>>> Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:43 | >>>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>>> >>>> [stack at undercloud ~]$ heat stack-show >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | grep >>>> NodeConfigIdentifiers >>>> | | "NodeConfigIdentifiers": >>>> "{u'deployment_identifier': 1467202276, u'controller_config': {u'1': >>>> u'os-apply-config deployment 796df02a-7550-414b-a084-8b591a13e6db >>>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >>>> u'0': u'os-apply-config deployment 613ec889-d852-470a-8e4c-6e243e1d2033 >>>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >>>> u'2': u'os-apply-config deployment 
c8b099d0-3af4-4ba0-a056-a0ce60f40e2d >>>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,'}, >>>> u'allnodes_extra': u'none'}" | >>>> >>>> However, when stack creating crashed update wouldn't help. >>>> >>>> [stack at undercloud ~]$ heat stack-update -x >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk -e update_env.yaml >>>> ERROR: PATCH update to non-COMPLETE stack is not supported. >>>> >>>> DUE TO :- >>>> >>>> [stack at undercloud ~]$ heat stack-list >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> | id | stack_name | stack_status | >>>> creation_time | updated_time | >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> | 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | overcloud | CREATE_FAILED | >>>> 2016-06-29T12:11:20 | None | >>>> +--------------------------------------+------------+---------------+---------------------+------ >>>> >>>> >>>> Complete error file `heat deployment-show >>>> 655c77fc-6a78-4cca-b4b7-a153a3f4ad52` is attached a gzip archive. >>>> >>>> >>>> Thanks. >>>> >>>> Boris. >>>> >>>> >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> The failure occurred during the post-deployment, which means that the >>> initial deployment succeeded, but then the steps that are done to the >>> completed overcloud failed. >>> >>> This is most commonly attributable to network problems between the >>> Undercloud and the Overcloud Public API. The Undercloud needs to reach >>> the Public API in order to do some of the post-configuration steps. If >>> this API isn't reachable, you end up with the error you saw above. >>> >>> You can test this connectivity by pinging the Public API VIP from the >>> Undercloud. 
Starting with the failed deployment, run "neutron
>>> port-list" against the Undercloud and look for the IP on the port
>>> named "public_virtual_ip". You should be able to ping this address from
>>> the Undercloud. If you can't reach that IP, then you need to check the
>>> connectivity/routing between the Undercloud and the External network on
>>> the Overcloud.
>>>
>>
>> I should also mention common causes of this problem:
>>
>> * Incorrect value for ExternalInterfaceDefaultRoute in the network
>> environment file.
>> * Controllers do not have the default route on the External network in
>> the NIC config templates (required for reachability from remote subnets).
>> * Incorrect subnet mask on the ExternalNetCidr in the network environment.
>> * Incorrect ExternalAllocationPools values in the network environment.
>> * Incorrect Ethernet switch config for the Controllers.
>>
>> The issue has been reproduced four times, on a daily basis since
>> 06/25/16, with exactly the same error at Step4 of
>> overcloud-ControllerNodesPostDeployment.
>> In the meantime I cannot reproduce the error.
>> The config 3xNode HA Controller + 1xCompute works.
>> There was one more issue: 3xNode HA Controller + 2xCompute
>> failed immediately when overcloud-deploy.sh started because
>> only 4 nodes could be introspected. I will test it tomorrow morning.
>>
>> Thanks a lot.
>> Boris.
>>
>> --
>> Dan Sneddon | Principal OpenStack Engineer
>> dsneddon at redhat.com | redhat.com/openstack
>> 650.254.4025 | dsneddon:irc @dxs:twitter
>>
>> _______________________________________________
>> rdo-list mailing list
>> rdo-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rdo-list
>>
>> To unsubscribe: rdo-list-unsubscribe at redhat.com
>>
>> This body part will be downloaded on demand.
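For reference, the shell trace at the end of the overcloud-deploy.sh output above corresponds to a loop like the following (a sketch, assuming the stack is named overcloud and the undercloud rc file has been sourced):

```shell
# Drill-down performed on failure, as seen in the trace above: list
# FAILED StructuredDeployment resources by physical resource id, then
# dump each software deployment, which includes the node's
# deploy_stdout/deploy_stderr for inspection.
source ~/stackrc
for failed in $(heat resource-list --nested-depth 5 overcloud \
                | grep FAILED \
                | grep 'StructuredDeployment ' \
                | cut -d '|' -f3); do
  heat deployment-show "$failed"
done
```

The `for` loop's word-splitting strips the padding that `cut` leaves around each id, so the ids can be passed to `heat deployment-show` directly.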
>> From bderzhavets at hotmail.com Thu Jun 30 18:04:09 2016 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Thu, 30 Jun 2016 18:04:09 +0000 Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) In-Reply-To: <57755B16.70706@redhat.com> References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com> <57752963.6000209@redhat.com> , <57755B16.70706@redhat.com> Message-ID: ________________________________ From: John Trowbridge Sent: Thursday, June 30, 2016 1:47 PM To: Boris Derzhavets; Dan Sneddon; rdo-list at redhat.com Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) On 06/30/2016 12:56 PM, Boris Derzhavets wrote: > > > > ________________________________ > From: John Trowbridge > Sent: Thursday, June 30, 2016 10:14 AM > To: Boris Derzhavets; Dan Sneddon; rdo-list at redhat.com > Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) > > > > On 06/30/2016 05:19 AM, Boris Derzhavets wrote: >> >> >> >> ________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets >> Sent: Wednesday, June 29, 2016 5:14 PM >> To: Dan Sneddon; rdo-list at redhat.com >> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) >> >> Yes , attempt to deploy >> >> ######################## >> # HA +2xCompute >> ######################## >> control_memory: 6144 >> compute_memory: 6144 >> >> undercloud_memory: 8192 >> >> # Giving the undercloud additional CPUs can greatly improve heat's >> # performance (and result in a shorter deploy time). 
>> undercloud_vcpu: 4
>
> Increasing this without also increasing the memory on the undercloud
> will usually end in sadness, because more CPUs means more worker
> processes means more memory consumption. In general, straying from the
> values in CI is unlikely to work unless you have significantly better
> hardware than what runs in CI (32G hosts with decent CPU).
>
> It will be verified tomorrow with
> undercloud_vcpu: 2
> This test would be fair. It will take about 2 hr.
> But I still believe that it is not the root cause of the issue with
> the configuration 3xController(HA) + 2xCompute having :-
> undercloud_memory: 8192
> undercloud_vcpu: 4
> which was tested OK many times from 06/05 up to 06/24
> with no problems.

Just realized that you are also deploying 2x compute nodes. Just FYI,
even the basic HA setup barely fits on a 32G host. In fact on 3 of the 4
nodes in CI, we rarely get a pass of HA because the resources are so
tight. We will actually be switching that job to a single-controller job
with pacemaker for exactly that reason (an email to the RDO list about
that will come later this afternoon).

How big is the virthost you are using?

32 GB. I have never seen problems with RAM with the config
3xController(HA) + 2xCompute. Currently I am in front of the VIRTHOST
console running 3xController(HA) + 1xCompute. I've taken a snapshot of
top and attached it to this message.

> Thank you very much for the feedback
> Boris.
>
> https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/ha.yml#L13
>
> It is not 100% certain that is the root cause of your issue, as the logs
> below look like we hit issues either with Ironic deployment to the
> nodes, or some issue with the Nova scheduler. Note, that is definitely a
> different problem (and possibly transient) than the one reported in the
> beginning of this thread.
>
>>
>> # Create three controller nodes and one compute node.
>> overcloud_nodes: >> - name: control_0 >> flavor: control >> - name: control_1 >> flavor: control >> - name: control_2 >> flavor: control >> >> - name: compute_0 >> flavor: compute >> - name: compute_1 >> flavor: compute >> >> # We don't need introspection in a virtual environment (because we are >> # creating all the "hardware" we really know the necessary >> # information). >> introspect: false >> >> # Tell tripleo about our environment. >> network_isolation: true >> extra_args: >- >> --control-scale 3 --compute-scale 2 --neutron-network-type vxlan >> --neutron-tunnel-types vxlan >> -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> --ntp-server pool.ntp.org >> deploy_timeout: 75 >> tempest: false >> pingtest: true >> >> Results during overcloud deployment :- >> >> 2016-06-30 09:09:31 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" >> 2016-06-30 09:09:31 [NovaCompute]: DELETE_IN_PROGRESS state changed >> 2016-06-30 09:09:34 [NovaCompute]: DELETE_COMPLETE state changed >> 2016-06-30 09:09:44 [NovaCompute]: CREATE_IN_PROGRESS state changed >> 2016-06-30 09:09:48 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" >> . . . . . 
>> >> 2016-06-30 09:11:36 [overcloud]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Compute.resources[0].resources.NovaCompute: Went to status ERROR due to "Message: Build of instance bf483c34-7010-48ea-8f58-fe192c91093f aborted: Failed to provision instance bf483c34-7010-48ea-8f58-fe192 >> 2016-06-30 09:11:36 [1]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:36 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:36 [1]: CREATE_COMPLETE state changed >> 2016-06-30 09:11:36 [overcloud-ControllerCephDeployment-62xh7uhtpjqp]: CREATE_COMPLETE Stack CREATE completed successfully >> 2016-06-30 09:11:37 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:37 [1]: SIGNAL_COMPLETE Unknown >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. >> + heat stack-list >> + grep -q CREATE_FAILED >> + deploy_status=1 >> ++ heat resource-list --nested-depth 5 overcloud >> ++ grep FAILED >> ++ grep 'StructuredDeployment ' >> ++ cut -d '|' -f3 >> + exit 1 >> >> >> Thanks. >> >> Boris >> >> >> ________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon >> Sent: Wednesday, June 29, 2016 1:46 PM >> To: rdo-list at redhat.com >> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) >> >> On 06/29/2016 10:42 AM, Dan Sneddon wrote: >>> On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >>>> Boris Derzhavets has shared a?OneDrive?file with you. To view it, click >>>> the link below. 
>>>> >>>> >> [https://p.sfx.ms/icons/v2/Large/Default.png] >> >> HeatCrash2.txt 1.gz >> 1drv.ms >> GZ File >> >> >>>> >>>> HeatCrash2.txt 1.gz >>>> [HeatCrash2.txt 1.gz] >>>> >>>> Reattach gzip archive via One Drive >>>> >>>> >>>> >>>> ----------------------------------------------------------------------- >>>> *From:* rdo-list-bounces at redhat.com on >>>> behalf of Boris Derzhavets >>>> *Sent:* Wednesday, June 29, 2016 9:36 AM >>>> *To:* John Trowbridge; shardy at redhat.com >>>> *Cc:* rdo-list at redhat.com >>>> *Subject:* [rdo-list] HA overcloud-deploy.sh crashes again ( >>>> ControllerOvercloudServicesDeployment_Step4 ) >>>> >>>> >>>> Attempt to follow steps suggested >>>> in http://hardysteven.blogspot.ru/2016/06/tripleo-partial-stack-updates.html >>>> >>>> >>>> ./deploy-overstack crashes >>>> >>>> >>>> 2016-06-29 12:42:41 >>>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk-ControllerOvercloudServicesDeployment_Step4-nzdoizlgrmx2]: >>>> CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment >>>> to server failed: deploy_status_code : Deployment exited with non-zero >>>> status code: 6 >>>> 2016-06-29 12:42:42 [ControllerOvercloudServicesDeployment_Step4]: >>>> CREATE_FAILED Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:43 >>>> [overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk]: CREATE_FAILED >>>> Resource CREATE failed: Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:44 [ControllerNodesPostDeployment]: CREATE_FAILED >>>> Error: >>>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with 
>>>> non-zero status code: 6 >>>> 2016-06-29 12:42:44 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:45 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:46 [overcloud]: CREATE_FAILED Resource CREATE failed: >>>> Error: >>>> resources.ControllerNodesPostDeployment.resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 >>>> 2016-06-29 12:42:46 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:47 [2]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:47 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:48 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >>>> 2016-06-29 12:42:48 [2]: SIGNAL_COMPLETE Unknown >>>> Stack overcloud CREATE_FAILED >>>> Deployment failed: Heat Stack create failed. >>>> + heat stack-list >>>> + grep -q CREATE_FAILED >>>> + deploy_status=1 >>>> ++ heat resource-list --nested-depth 5 overcloud >>>> ++ grep FAILED >>>> ++ grep 'StructuredDeployment ' >>>> ++ cut -d '|' -f3 >>>> + for failed in '$(heat resource-list --nested-depth 5 >>>> overcloud | grep FAILED | >>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>>> + heat deployment-show 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 >>>> + for failed in '$(heat resource-list --nested-depth 5 >>>> overcloud | grep FAILED | >>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>>> + heat deployment-show 1fe5153c-e017-4ee5-823a-3d1524430c1d >>>> + for failed in '$(heat resource-list --nested-depth 5 >>>> overcloud | grep FAILED | >>>> grep '\''StructuredDeployment '\'' | cut -d '\''|'\'' -f3)' >>>> + heat deployment-show bf6f25f4-d812-41e9-a7a8-122de619a624 >>>> + exit 1 >>>> >>>> ***************************** >>>> Troubleshooting steps :- >>>> ***************************** >>>> >>>> [stack at undercloud ~]$ . 
stackrc >>>> [stack at undercloud ~]$ heat resource-list overcloud | grep >>>> ControllerNodesPost >>>> | ControllerNodesPostDeployment | >>>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>>> OS::TripleO::ControllerPostDeployment | CREATE_FAILED | >>>> 2016-06-29T12:11:21 | >>>> >>>> >>>> [stack at undercloud ~]$ heat stack-list -n | grep "^| >>>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3" >>>> | f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 | >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>>> | CREATE_FAILED | 2016-06-29T12:31:11 | None | >>>> 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | >>>> >>>> >>>> >>>> [stack at undercloud ~]$ heat event-list -m >>>> f1d6a474-c946-46bf-ab0c-2fdaeb55d0b3 >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>>> >>>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>>> | resource_name | >>>> id | >>>> resource_status_reason >>>> | resource_status | event_time | >>>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | >>>> 10ec0cf9-b3c9-4191-9966-3f4d47f27e2a | Stack CREATE started >>>> . . . . . . . . . . . . . . . . . >>>> Step1,2,3 succeeded >>>> . . . . . . . . . . . . . . . . . 
>>>> >>>> | CREATE_IN_PROGRESS | 2016-06-29T12:31:14 | >>>> | ControllerPuppetConfig | >>>> a2a1df33-5106-425c-b16d-8d2df709b19f | state >>>> changed >>>> | CREATE_COMPLETE | 2016-06-29T12:35:02 | >>>> | ControllerOvercloudServicesDeployment_Step4 | >>>> 1e151333-4de5-4e7b-907c-ea0f42d31a47 | state >>>> changed >>>> | CREATE_IN_PROGRESS | 2016-06-29T12:35:03 | >>>> | ControllerOvercloudServicesDeployment_Step4 | >>>> 7bf36334-3d92-4554-b6c0-41294a072ab6 | Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 | CREATE_FAILED | >>>> 2016-06-29T12:42:42 | >>>> | overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk >>>> | e72fb6f4-c2aa-4fe8-9bd1-5f5ad152685c | Resource CREATE failed: >>>> Error: >>>> resources.ControllerOvercloudServicesDeployment_Step4.resources[0]: >>>> Deployment to server failed: deploy_status_code: Deployment exited with >>>> non-zero status code: 6 | CREATE_FAILED | 2016-06-29T12:42:43 | >>>> +------------------------------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------+---------------------+ >>>> >>>> [stack at undercloud ~]$ heat stack-show >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk | grep >>>> NodeConfigIdentifiers >>>> | | "NodeConfigIdentifiers": >>>> "{u'deployment_identifier': 1467202276, u'controller_config': {u'1': >>>> u'os-apply-config deployment 796df02a-7550-414b-a084-8b591a13e6db >>>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >>>> u'0': u'os-apply-config deployment 613ec889-d852-470a-8e4c-6e243e1d2033 >>>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,', >>>> u'2': u'os-apply-config deployment 
c8b099d0-3af4-4ba0-a056-a0ce60f40e2d >>>> completed,Root CA cert injection not enabled.,TLS not enabled.,None,'}, >>>> u'allnodes_extra': u'none'}" | >>>> >>>> However, when stack creating crashed update wouldn't help. >>>> >>>> [stack at undercloud ~]$ heat stack-update -x >>>> overcloud-ControllerNodesPostDeployment-2r4tlv5icaxk -e update_env.yaml >>>> ERROR: PATCH update to non-COMPLETE stack is not supported. >>>> >>>> DUE TO :- >>>> >>>> [stack at undercloud ~]$ heat stack-list >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> | id | stack_name | stack_status | >>>> creation_time | updated_time | >>>> +--------------------------------------+------------+---------------+---------------------+--------------+ >>>> | 17f82f6e-e0ca-44c6-9058-de82c00d4f79 | overcloud | CREATE_FAILED | >>>> 2016-06-29T12:11:20 | None | >>>> +--------------------------------------+------------+---------------+---------------------+------ >>>> >>>> >>>> Complete error file `heat deployment-show >>>> 655c77fc-6a78-4cca-b4b7-a153a3f4ad52` is attached a gzip archive. >>>> >>>> >>>> Thanks. >>>> >>>> Boris. >>>> >>>> >>>> >>>> _______________________________________________ >>>> rdo-list mailing list >>>> rdo-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rdo-list >>>> >>>> To unsubscribe: rdo-list-unsubscribe at redhat.com >>>> >>> >>> The failure occurred during the post-deployment, which means that the >>> initial deployment succeeded, but then the steps that are done to the >>> completed overcloud failed. >>> >>> This is most commonly attributable to network problems between the >>> Undercloud and the Overcloud Public API. The Undercloud needs to reach >>> the Public API in order to do some of the post-configuration steps. If >>> this API isn't reachable, you end up with the error you saw above. >>> >>> You can test this connectivity by pinging the Public API VIP from the >>> Undercloud. 
Starting with the failed deployment, run "neutron >>> port-list" against the Underlcloud and look for the IP on the port >>> named "public_virtual_ip". You should be able to ping this address from >>> the Undercloud. If you can't reach that IP, then you need to check the >>> connectivity/routing between the Undercloud and the External network on >>> the Overcloud. >>> >> >> I should also mention common causes of this problem: >> >> * Incorrect value for ExternalInterfaceDefaultRoute in the network >> environment file. >> * Controllers do not have the default route on the External network in >> the NIC config templates (required for reachability from remote subnets). >> * Incorrect subnet mask on the ExternalNetCidr in the network environment. >> * Incorrect ExternalAllocationPools values in the network environment. >> * Incorrect Ethernet switch config for the Controllers. >> >> Issue has been reproduced with exactly same error 4 times >> starting since 06/25/16 on daily basis with exactly same error at Step4 >> of overcloud-ControllerNodesPostDeployment. >> In meantime I cannot reproduce the error. >> Config 3xNode HA Controller + 1xCompute works . >> There was one more issue 3xNode HA Controller + 2xCompute >> failed immediately when overcloud-deploy.sh started due to >> only 4 nodes could be introspected. I will test it tomorrow morning. >> >> Thanks a lot. >> Boris. >> >> -- >> Dan Sneddon | Principal OpenStack Engineer >> dsneddon at redhat.com | redhat.com/openstack >> 650.254.4025 | dsneddon:irc @dxs:twitter >> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> >> >> This body part will be downloaded on demand. >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From bderzhavets at hotmail.com Thu Jun 30 18:17:56 2016
From: bderzhavets at hotmail.com (Boris Derzhavets)
Date: Thu, 30 Jun 2016 18:17:56 +0000
Subject: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
In-Reply-To: <57755B16.70706@redhat.com>
References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> <2a4caf42c3b4471d9435dae86f201748@PREWE13M11.ad.sprint.com> <9e3942c3-6ebf-f4c6-41a2-23d11a1e8ffe@redhat.com> <57752963.6000209@redhat.com> <57755B16.70706@redhat.com>
Message-ID: 

Boris Derzhavets has shared a OneDrive file with you. To view it, click the link below.

[https://r1.res.office365.com/owa/prem/images/dc-png_20.png]

TOP0630.png

________________________________
From: John Trowbridge
Sent: Thursday, June 30, 2016 1:47 PM
To: Boris Derzhavets; Dan Sneddon; rdo-list at redhat.com
Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )

On 06/30/2016 12:56 PM, Boris Derzhavets wrote:
>
> ________________________________
> From: John Trowbridge
> Sent: Thursday, June 30, 2016 10:14 AM
> To: Boris Derzhavets; Dan Sneddon; rdo-list at redhat.com
> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
>
> On 06/30/2016 05:19 AM, Boris Derzhavets wrote:
>>
>> ________________________________
>> From: rdo-list-bounces at redhat.com on behalf of Boris Derzhavets
>> Sent: Wednesday, June 29, 2016 5:14 PM
>> To: Dan Sneddon; rdo-list at redhat.com
>> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 )
>>
>> Yes, attempt to deploy
>>
>> ########################
>> # HA +2xCompute
>> ########################
>> control_memory: 6144
>> compute_memory: 6144
>>
>> undercloud_memory: 8192
>>
>> # Giving the
undercloud additional CPUs can greatly improve heat's
>> # performance (and result in a shorter deploy time).
>> undercloud_vcpu: 4
>
> Increasing this without also increasing the memory on the undercloud
> will usually end in sadness, because more CPUs means more worker
> processes, which means more memory consumption. In general, straying from the
> values in CI is unlikely to work unless you have significantly better
> hardware than what runs in CI (32G hosts with decent CPU).
>
> It will be verified tomorrow with
> undercloud_vcpu: 2
> This test would be fair. It will take about 2 hr.
> But I still believe that it is not the root cause of the issue with
> Configuration - 3xController(HA) + 2xCompute having :-
> undercloud_memory: 8192
> undercloud_vcpu: 4
> which was tested OK many times from 06/05 up to 06/24
> with no problems.

Just realized that you are also deploying 2x compute nodes. Just FYI, even the basic HA setup barely fits on a 32G host. In fact, on 3 of the 4 nodes in CI we rarely get a pass of HA because the resources are so tight. We will actually be switching that job to a single controller job with pacemaker for exactly that reason (an email to the RDO list about that will come later this afternoon). How big is the virthost you are using?

32 GB. I am currently at the VIRTHOST console and took a TOP snapshot of the running config 3xController(HA) + 1xCompute. The snapshot is attached. (Disregard the previous message; it was sent accidentally.) Config: 3xController(HA) + 2xCompute. I was monitoring via TOP many times and didn't see big problems with RAM. The current problem is most probably related to introspection of the nodes supposed to build the overcloud.

> Thank you very much for the feedback
> Boris.
>
> https://github.com/openstack/tripleo-quickstart/blob/master/config/general_config/ha.yml#L13
>
> It is not 100% certain that is the root cause of your issue, as the logs below
> look like we hit issues either with Ironic deployment to the nodes, or
> some issue with Nova scheduler.
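For reference, the overcloud-deploy.sh traces quoted in this thread show how the script isolates the failed deployments: it pipes `heat resource-list --nested-depth 5 overcloud` through grep/cut and then runs `heat deployment-show <id>` on each result. A minimal standalone sketch of that extraction, run here against hypothetical captured output instead of a live undercloud:

```shell
#!/bin/sh
# Hypothetical sample of 'heat resource-list --nested-depth 5 overcloud' output;
# on a real undercloud you would feed the command's output in directly.
sample='| resource_name | id | resource_type | resource_status |
| NovaCompute | bf6f25f4-d812-41e9-a7a8-122de619a624 | OS::Nova::Server | CREATE_FAILED |
| ControllerDeployment | 655c77fc-6a78-4cca-b4b7-a153a3f4ad52 | OS::Heat::StructuredDeployment | CREATE_FAILED |'

# Same pipeline overcloud-deploy.sh uses: keep only FAILED StructuredDeployment
# rows, then cut out the id column (field 3, since field 1 is empty before the
# leading '|'). Each id would then go to 'heat deployment-show <id>'.
echo "$sample" | grep FAILED | grep 'StructuredDeployment ' | cut -d '|' -f3
```

This prints the id of the failed StructuredDeployment row only; the NovaCompute failure is filtered out because its type column does not match.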
Note, that is definitely a different > problem (and possibly transient), than the one reported in the beginning > of this thread. > >> >> # Create three controller nodes and one compute node. >> overcloud_nodes: >> - name: control_0 >> flavor: control >> - name: control_1 >> flavor: control >> - name: control_2 >> flavor: control >> >> - name: compute_0 >> flavor: compute >> - name: compute_1 >> flavor: compute >> >> # We don't need introspection in a virtual environment (because we are >> # creating all the "hardware" we really know the necessary >> # information). >> introspect: false >> >> # Tell tripleo about our environment. >> network_isolation: true >> extra_args: >- >> --control-scale 3 --compute-scale 2 --neutron-network-type vxlan >> --neutron-tunnel-types vxlan >> -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml >> --ntp-server pool.ntp.org >> deploy_timeout: 75 >> tempest: false >> pingtest: true >> >> Results during overcloud deployment :- >> >> 2016-06-30 09:09:31 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" >> 2016-06-30 09:09:31 [NovaCompute]: DELETE_IN_PROGRESS state changed >> 2016-06-30 09:09:34 [NovaCompute]: DELETE_COMPLETE state changed >> 2016-06-30 09:09:44 [NovaCompute]: CREATE_IN_PROGRESS state changed >> 2016-06-30 09:09:48 [NovaCompute]: CREATE_FAILED ResourceInError: resources.NovaCompute: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500" >> . . . . . 
>> >> 2016-06-30 09:11:36 [overcloud]: CREATE_FAILED Resource CREATE failed: ResourceInError: resources.Compute.resources[0].resources.NovaCompute: Went to status ERROR due to "Message: Build of instance bf483c34-7010-48ea-8f58-fe192c91093f aborted: Failed to provision instance bf483c34-7010-48ea-8f58-fe192 >> 2016-06-30 09:11:36 [1]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:36 [ControllerDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:36 [1]: CREATE_COMPLETE state changed >> 2016-06-30 09:11:36 [overcloud-ControllerCephDeployment-62xh7uhtpjqp]: CREATE_COMPLETE Stack CREATE completed successfully >> 2016-06-30 09:11:37 [NetworkDeployment]: SIGNAL_COMPLETE Unknown >> 2016-06-30 09:11:37 [1]: SIGNAL_COMPLETE Unknown >> Stack overcloud CREATE_FAILED >> Deployment failed: Heat Stack create failed. >> + heat stack-list >> + grep -q CREATE_FAILED >> + deploy_status=1 >> ++ heat resource-list --nested-depth 5 overcloud >> ++ grep FAILED >> ++ grep 'StructuredDeployment ' >> ++ cut -d '|' -f3 >> + exit 1 >> >> >> Thanks. >> >> Boris >> >> >> ________________________________ >> From: rdo-list-bounces at redhat.com on behalf of Dan Sneddon >> Sent: Wednesday, June 29, 2016 1:46 PM >> To: rdo-list at redhat.com >> Subject: Re: [rdo-list] HA overcloud-deploy.sh crashes again ( ControllerOvercloudServicesDeployment_Step4 ) >> >> On 06/29/2016 10:42 AM, Dan Sneddon wrote: >>> On 06/29/2016 07:03 AM, Boris Derzhavets wrote: >>>> Boris Derzhavets has shared a?OneDrive?file with you. To view it, click >>>> the link below. 
>>>> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From trown at redhat.com Thu Jun 30 18:46:04 2016
From: trown at redhat.com (John Trowbridge)
Date: Thu, 30 Jun 2016 14:46:04 -0400
Subject: [rdo-list] Replacing the tripleo-quickstart HA job with a single controller pacemaker job
Message-ID: <577568EC.8050802@redhat.com>

Howdy folks,

Just wanted to give a heads up that I plan to replace the "high-availability" tripleo-quickstart job in the CI promotion pipeline[1] with a lower-footprint job.

In CI, we get a virthost with 32G of RAM and a mediocre CPU. It is really hard to fit 5 very active VMs on that, and for that reason we have never had the HA job stable enough to use as a gate. Instead, we will test the pacemaker code path in tripleo by using a single-controller setup with pacemaker enabled. We were never actually testing HA (i.e. failover scenarios) in the current job, so this should be a pretty minimal loss in coverage. Since this allows us to drop two CPU-intensive nodes from the deploy, we can add a ceph node to that job. This will end up with more code coverage than the current HA job, and will hopefully be stable enough to use as a gate as well.

Longer term, it would be good to restore an actual HA job, maybe even adding some failure scenario tests to the job. I have a couple of ideas about how we could do this, but none are feasible in the short term.

1. Use pre-existing servers for deploying[2]. This would allow running the HA job against any cloud, where we could size the nodes appropriately to make the job stable.

2. Use an OVB cloud for the HA job. Soon we should have an OVB (openstack virtual baremetal) cloud to run tests in. OVB would have all of the benefits of the solution above (unrestricted VM size), and would also provide us a way to test Ironic more realistically, since it mocks IPMI rather than our current method of using a fake ironic driver (which just does virsh commands over SSH).

3.
Add a feature to tripleo-quickstart to bridge multiple virthosts. If we could deploy our virtual machines across 2 different hosts, we would then have much more room to deploy the HA job.

If anyone has some better ideas, they are very welcome!

-- trown

[1] https://ci.centos.org/view/rdo/view/promotion-pipeline/
[2] https://review.openstack.org/#/c/324777/

From rbowen at redhat.com Thu Jun 30 19:44:07 2016
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 30 Jun 2016 12:44:07 -0700
Subject: [rdo-list] openstack-dashboard-theme empty
Message-ID: <4316ad4a-c170-d16b-f242-7ea0f054f9e3@redhat.com>

It was pointed out to me at Red Hat Summit today that the openstack-dashboard-theme packages in both the Mitaka repo http://mirror.centos.org/centos/7/cloud/x86_64/openstack-mitaka/ and the trunk repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested/ are empty.

-- 
Rich Bowen - rbowen at redhat.com
RDO Community Liaison
http://rdocommunity.org
@RDOCommunity

From apevec at redhat.com Thu Jun 30 23:40:54 2016
From: apevec at redhat.com (Alan Pevec)
Date: Fri, 1 Jul 2016 01:40:54 +0200
Subject: [rdo-list] openstack-dashboard-theme empty
In-Reply-To: <4316ad4a-c170-d16b-f242-7ea0f054f9e3@redhat.com>
References: <4316ad4a-c170-d16b-f242-7ea0f054f9e3@redhat.com>
Message-ID: 

On Thu, Jun 30, 2016 at 9:44 PM, Rich Bowen wrote:
> It was pointed out to me at Red Hat Summit today that the
> openstack-dashboard-theme packages in both the Mitaka repo
> http://mirror.centos.org/centos/7/cloud/x86_64/openstack-mitaka/ and the
> trunk repo
> http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tested/
> are empty.

Correct, Matthias is working on creating a standalone theme project and package.
Cheers, Alan From Milind.Gunjan at sprint.com Thu Jun 30 20:27:21 2016 From: Milind.Gunjan at sprint.com (Gunjan, Milind [CTO]) Date: Thu, 30 Jun 2016 20:27:21 +0000 Subject: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment References: <39b7c1ff61f54d9484df7c305145b5af@PREWE13M11.ad.sprint.com> <08315fa5784d4cbe9cd93b1cd7a601fe@PREWE13M11.ad.sprint.com> <24755b61ab624c10bea2152601a0f67f@PREWE13M11.ad.sprint.com> Message-ID: Hi Guys, I am still having issues while installing the undercloud. I have exhausted all the options and tried to do undercloud install as per the tripleo guidelines. http://tripleo.org/installation/installation.html But, each time I hit the same errors: Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504) Error: Not managing Keystone_service[glance] due to earlier Keystone API failures. Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures. Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504) Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures. 
Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures.

Please suggest.

From: Gunjan, Milind [CTO]
Sent: Wednesday, June 29, 2016 7:39 AM
To: 'Mohammed Arafa'
Cc: rdo-list at redhat.com; Marius Cornea
Subject: RE: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment

Okay. Updated the host file with the short name and FQDN.

[root at undercloud ~]# cat /etc/hosts
127.0.0.1 undercloud undercloud.poc

I am looking to install a stable release of OpenStack Kilo/Liberty. Can you please point me to the stable repo which I can pull for the install.

Best Regards,
Milind

From: Mohammed Arafa [mailto:mohammed.arafa at gmail.com]
Sent: Wednesday, June 29, 2016 7:34 AM
To: Gunjan, Milind [CTO] >
Cc: rdo-list at redhat.com; Marius Cornea >
Subject: RE: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment

You need both the short name and the FQDN in your hosts file. (RabbitMQ issue. Don't know if that's been fixed.)

On Jun 29, 2016 1:19 PM, "Gunjan, Milind [CTO]" > wrote:
Thanks a lot for pointing out the mistakes. So, as suggested, I updated the hostname in the host file to match, in a similar way as instructed in the docs:

[root at undercloud ~]# cat /etc/hostname
undercloud.poc

[root at undercloud ~]# cat /etc/hosts
127.0.0.1 undercloud.poc

Is there any stable kilo package which can be used for installation?
Best Regards,
Milind Gunjan

From: Mohammed Arafa [mailto:mohammed.arafa at gmail.com]
Sent: Wednesday, June 29, 2016 3:25 AM
To: Marius Cornea >
Cc: rdo-list at redhat.com; Gunjan, Milind [CTO] >
Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment

The hostnames in the hosts file don't match.

On Jun 29, 2016 8:58 AM, "Marius Cornea" > wrote:
On Tue, Jun 28, 2016 at 10:45 PM, Gunjan, Milind [CTO] > wrote:
> Hi All,
>
> Thanks a lot for continued support.
>
> I would just like to get clarity regarding the last recommendation.
> My current deployment is failing with the following errors:
>
> Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504)
> Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
> Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
>
> I have previously done a RHEL OSP7 deployment, and I verified that the hosts file of the RHEL OSP7 undercloud was configured with the host gateway used by the PXE network.
>
> Similarly, we have set the current host gateway to 192.0.2.1, as shown below, for the RDO Manager undercloud installation:
> [stack at rdo-undercloud etc]$ cat /etc/hosts
>> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain
>
> I would like to know if it is required during the RDO deployment to have it set to the address of the public interface, e.g.:
> $ip_public_nic rdo-undercloud rdo-undercloud.mydomain.
The reason I asked for it is because of:

Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)")

Typically the undercloud's local_ip is used for these operations, but in your case it's added to the hosts file and maybe this IP-to-name mapping doesn't allow the installation to proceed. Please note that the docs[1] point out that you should have an FQDN entry in the hosts file before the undercloud installation (when 192.0.2.1 isn't yet set on the system); that's why I mentioned the IP address of the public NIC.

[1] http://docs.openstack.org/developer/tripleo-docs/installation/installation.html

>
> Thanks again for your time and help. Really appreciate it.
>
> Best Regards,
> milind
>
> -----Original Message-----
> From: Marius Cornea [mailto:marius at remote-lab.net]
> Sent: Tuesday, June 28, 2016 4:28 AM
> To: Gunjan, Milind [CTO] >
> Cc: rdo-list at redhat.com
> Subject: Re: [rdo-list] Redeploying UnderCloud for baremetal triple-o deployment
>
> On Tue, Jun 28, 2016 at 5:18 AM, Gunjan, Milind [CTO] > wrote:
>> Hi Dan,
>> Thanks a lot for your response.
>>
>> Even after properly updating the undercloud.conf file and checking the network configuration, the undercloud deployment still fails.
>>
>> To recreate the issue, I am listing all the configuration steps:
>> 1. Installed a CentOS Linux release 7.2.1511 (Core) image on bare metal.
>> 2. Created the stack user and gave the stack user the required permissions.
>> 3.
>> set the hostname:
>> sudo hostnamectl set-hostname rdo-undercloud.mydomain
>> sudo hostnamectl set-hostname --transient rdo-undercloud.mydomain
>>
>> [stack at rdo-undercloud etc]$ cat /etc/hosts
>> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>> 192.0.2.1 rdo-undercloud undercloud-rdo.mydomain
>
> Could you try removing the 192.0.2.1 entry from /etc/hosts and replacing
> it with the address of the public interface, e.g.:
> $ip_public_nic rdo-undercloud rdo-undercloud.mydomain
>
> then rerun openstack undercloud install
>
>> 4. enable the required repositories:
>> sudo yum -y install epel-release
>> sudo curl -o /etc/yum.repos.d/delorean-liberty.repo https://trunk.rdoproject.org/centos7-liberty/current/delorean.repo
>> sudo curl -o /etc/yum.repos.d/delorean-deps-liberty.repo http://trunk.rdoproject.org/centos7-liberty/delorean-deps.repo
>>
>> 5. install the client packages:
>> sudo yum -y install yum-plugin-priorities
>> sudo yum install -y python-tripleoclient
>>
>> 6. update undercloud.conf:
>> [stack at rdo-undercloud ~]$ cat undercloud.conf
>> [DEFAULT]
>> local_ip = 192.0.2.1/24
>> undercloud_public_vip = 192.0.2.2
>> undercloud_admin_vip = 192.0.2.3
>> local_interface = enp6s0
>> masquerade_network = 192.0.2.0/24
>> dhcp_start = 192.0.2.150
>> dhcp_end = 192.0.2.199
>> network_cidr = 192.0.2.0/24
>> network_gateway = 192.0.2.1
>> discovery_iprange = 192.0.2.200,192.0.2.230
>> discovery_runbench = false
>> [auth]
>>
>> 7.
>> install the undercloud:
>>
>> openstack undercloud install
>>
>> The install ends in errors:
>> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_user[neutron]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_user[heat]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: Could not prefetch keystone_service provider 'openstack': Execution of '/bin/openstack service list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504)
>> Error: Not managing Keystone_service[glance] due to earlier Keystone API failures.
>> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]/ensure: change from absent to present failed: Not managing Keystone_service[glance] due to earlier Keystone API failures.
>> Error: Could not prefetch keystone_role provider 'openstack': Execution of '/bin/openstack role list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
>> Error: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
>> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone_role[ResellerAdmin]/ensure: change from absent to present failed: Not managing Keystone_role[ResellerAdmin] due to earlier Keystone API failures.
>> Error: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
>> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_service[ironic::baremetal]/ensure: change from absent to present failed: Not managing Keystone_service[ironic] due to earlier Keystone API failures.
>> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_user[nova]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
>> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova v3 service, user novav3]/Keystone_service[novav3::computev3]/ensure: change from absent to present failed: Not managing Keystone_service[novav3] due to earlier Keystone API failures.
>> Error: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
>> Error: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]/ensure: change from absent to present failed: Not managing Keystone_role[heat_stack_user] due to earlier Keystone API failures.
>> Error: /Stage[main]/Ironic::Keystone::Auth/Keystone::Resource::Service_identity[ironic]/Keystone_user[ironic]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: Not managing Keystone_service[nova] due to earlier Keystone API failures.
>> Error: /Stage[main]/Nova::Keystone::Auth/Keystone::Resource::Service_identity[nova service, user nova]/Keystone_service[nova::compute]/ensure: change from absent to present failed: Not managing Keystone_service[nova] due to earlier Keystone API failures.
>> Error: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures.
>> Error: /Stage[main]/Swift::Keystone::Auth/Keystone_role[swiftoperator]/ensure: change from absent to present failed: Not managing Keystone_role[swiftoperator] due to earlier Keystone API failures.
>> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_user[ceilometer]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: Not managing Keystone_service[neutron] due to earlier Keystone API failures.
>> Error: /Stage[main]/Neutron::Keystone::Auth/Keystone::Resource::Service_identity[neutron]/Keystone_service[neutron::network]/ensure: change from absent to present failed: Not managing Keystone_service[neutron] due to earlier Keystone API failures.
>> Error: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures.
>> Error: /Stage[main]/Ceilometer::Keystone::Auth/Keystone::Resource::Service_identity[ceilometer]/Keystone_service[ceilometer::metering]/ensure: change from absent to present failed: Not managing Keystone_service[ceilometer] due to earlier Keystone API failures.
>> Error: Not managing Keystone_service[swift] due to earlier Keystone API failures.
>> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_service[swift::object-store]/ensure: change from absent to present failed: Not managing Keystone_service[swift] due to earlier Keystone API failures.
>> Error: Not managing Keystone_service[keystone] due to earlier Keystone API failures.
>> Error: /Stage[main]/Keystone::Endpoint/Keystone::Resource::Service_identity[keystone]/Keystone_service[keystone::identity]/ensure: change from absent to present failed: Not managing Keystone_service[keystone] due to earlier Keystone API failures.
>> Error: Not managing Keystone_service[heat] due to earlier Keystone API failures.
>> Error: /Stage[main]/Heat::Keystone::Auth/Keystone::Resource::Service_identity[heat]/Keystone_service[heat::orchestration]/ensure: change from absent to present failed: Not managing Keystone_service[heat] due to earlier Keystone API failures.
>> Error: Could not prefetch keystone_endpoint provider 'openstack': Execution of '/bin/openstack endpoint list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
>> Error: /Stage[main]/Swift::Keystone::Auth/Keystone::Resource::Service_identity[swift]/Keystone_user[swift]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: Could not prefetch keystone_tenant provider 'openstack': Execution of '/bin/openstack project list --quiet --format csv --long' returned 1: Gateway Timeout (HTTP 504)
>> Error: Not managing Keystone_tenant[service] due to earlier Keystone API failures.
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[service]/ensure: change from absent to present failed: Not managing Keystone_tenant[service] due to earlier Keystone API failures.
>> Error: Not managing Keystone_tenant[admin] due to earlier Keystone API failures.
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_tenant[admin]/ensure: change from absent to present failed: Not managing Keystone_tenant[admin] due to earlier Keystone API failures.
>> Error: Not managing Keystone_role[admin] due to earlier Keystone API failures.
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_role[admin]/ensure: change from absent to present failed: Not managing Keystone_role[admin] due to earlier Keystone API failures.
>> Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/bin/openstack domain show --format shell Default' returned 1: Could not find resource Default
>> Error: Could not prefetch keystone_domain provider 'openstack': Execution of '/bin/openstack domain list --quiet --format csv' returned 1: Gateway Timeout (HTTP 504)
>> Notice: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]/returns: ERROR: (pymysql.err.OperationalError) (1045, u"Access denied for user 'heat'@'rdo-undercloud' (using password: YES)")
>> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: Failed to call refresh: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0]
>> Error: /Stage[main]/Heat::Db::Sync/Exec[heat-dbsync]: heat-manage --config-file /etc/heat/heat.conf db_sync returned 1 instead of one of [0]
>> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
>>
>> Notice: /Stage[main]/Heat::Deps/Anchor[heat::service::end]: Triggered 'refresh' from 4 events
>> Notice: Finished catalog run in 5259.44 seconds
>> + rc=6
>> + set -e
>> + echo 'puppet apply exited with exit code 6'
>> puppet apply exited with exit code 6
>> + '[' 6 '!=' 2 -a 6 '!=' 0 ']'
>> + exit 6
>> [2016-06-27 18:54:04,092] (os-refresh-config) [ERROR] during configure phase. [Command '['dib-run-parts', '/usr/libexec/os-refresh-config/configure.d']' returned non-zero exit status 1]
>>
>> [2016-06-27 18:54:04,093] (os-refresh-config) [ERROR] Aborting...
>> Traceback (most recent call last):
>> File "", line 1, in
>> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 815, in install
>> _run_orc(instack_env)
>> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 699, in _run_orc
>> _run_live_command(args, instack_env, 'os-refresh-config')
>> File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 370, in _run_live_command
>> raise RuntimeError('%s failed. See log for details.' % name)
>> RuntimeError: os-refresh-config failed. See log for details.
>> Command 'instack-install-undercloud' returned non-zero exit status 1
>>
>> I am not able to understand the exact cause of the undercloud install failure. It would be really helpful if you guys could point me in the direction of the exact cause of the issue and any possible resolution.
>>
>> Thanks a lot.
>>
>> Best Regards,
>> Milind
>>
>> -----Original Message-----
>> From: Dan Sneddon [mailto:dsneddon at redhat.com]
>> Sent: Monday, June 27, 2016 12:40 PM
>> To: Gunjan, Milind [CTO] >; rdo-list at redhat.com
>> Subject: Re: [rdo-list] Redeploying UnderCloud
>>
>> On 06/27/2016 06:41 AM, Gunjan, Milind [CTO] wrote:
>>> Hi All,
>>>
>>> Greetings.
>>>
>>> This is my first post and I am fairly new to RDO OpenStack. I am
>>> working on an RDO TripleO deployment on bare metal. Due to incorrect
>>> values in the undercloud.conf file, my undercloud deployment failed. I
>>> would like to redeploy the undercloud and I am trying to understand what
>>> steps have to be taken before redeploying it. All the
>>> deployment was done under the stack user, so the first step will be to
>>> delete the stack user. I am not sure what has to be done regarding the
>>> networking configuration autogenerated by os-net-config during the older
>>> install.
>>>
>>> Please advise.
>>>
>>> Best Regards,
>>>
>>> Milind
>>
>> No, you definitely don't want to delete the stack user, especially not as your first step! That would get rid of the configuration files, etc. that are in ~stack, and generally make your life harder than it needs to be.
>>
>> Anyway, it isn't necessary. You can follow a procedure very much like the one you use when upgrading the undercloud, with a couple of extra steps.
>>
>> As the stack user, edit your undercloud.conf and make sure there are no more typos.
>>
>> If the typos were in the network configuration, you should delete the bridge and remove the ifcfg files:
>>
>> $ sudo ifdown br-ctlplane
>> $ sudo ovs-vsctl del-br br-ctlplane
>> $ sudo rm /etc/sysconfig/network-scripts/*br-ctlplane
>>
>> Next, run the undercloud installation again:
>>
>> $ sudo yum update -y  # Reboot afterwards if kernel or core packages were updated
>> $ openstack undercloud install
>>
>> Then proceed with the rest of the instructions. You may find that if you already uploaded disk images or imported nodes, they will still be in the database. That's OK, or you can delete and reimport them.
>>
>> --
>> Dan Sneddon | Principal OpenStack Engineer
>> dsneddon at redhat.com | redhat.com/openstack
>> 650.254.4025 | dsneddon:irc @dxs:twitter
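Dan's reset-and-reinstall sequence can be collected into a single sketch. This is not an official tool; the bridge name, paths, and ordering are taken from his mail, and the helper only prints the commands unless APPLY=1 is set, so the sequence can be reviewed before running it for real:

```shell
#!/bin/sh
# Sketch of the undercloud reset sequence described in this thread.
# By default this only PRINTS the commands; set APPLY=1 to run them.

run() {
    if [ "${APPLY:-0}" = 1 ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# 1. Tear down the control-plane bridge left behind by the failed
#    install and remove its generated ifcfg file.
run sudo ifdown br-ctlplane
run sudo ovs-vsctl del-br br-ctlplane
run sudo rm -f /etc/sysconfig/network-scripts/ifcfg-br-ctlplane

# 2. Update packages (reboot afterwards if the kernel or core
#    packages changed), then rerun the installer with the
#    corrected undercloud.conf.
run sudo yum update -y
run openstack undercloud install
```

Note that the original `rm` uses the glob `*br-ctlplane`, which also catches any route files for the bridge; the single-path variant here is just the common case.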
_______________________________________________
rdo-list mailing list
rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

To unsubscribe: rdo-list-unsubscribe at redhat.com
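The recurring diagnosis in this thread (Keystone 504s and the heat "Access denied", both traced back to /etc/hosts) can be checked before running `openstack undercloud install`. Below is a minimal sketch; it is not part of any RDO tooling, and the function name and OK/FAIL messages are made up here. It verifies that the FQDN and the short name share one hosts entry, the layout Mohammed Arafa recommends:

```shell
#!/bin/sh
# check_hosts FQDN SHORTNAME HOSTS_TEXT
# Succeeds (and prints OK) only when one line of the given hosts
# text carries both the FQDN and the short name.
check_hosts() {
    fqdn=$1; short=$2; hosts=$3
    # Lines where the FQDN appears as a standalone word.
    line=$(printf '%s\n' "$hosts" \
        | grep -E "(^|[[:space:]])$fqdn([[:space:]]|\$)" || true)
    if [ -z "$line" ]; then
        echo "FAIL: $fqdn has no hosts entry"
        return 1
    fi
    # The short name must sit on that same line.
    if printf '%s\n' "$line" \
        | grep -qE "(^|[[:space:]])$short([[:space:]]|\$)"; then
        echo "OK: $fqdn and $short share an entry"
    else
        echo "FAIL: $short is missing from the $fqdn line"
        return 1
    fi
}

# The corrected entry from the thread passes...
check_hosts undercloud.poc undercloud "127.0.0.1 undercloud undercloud.poc"
# ...while the original single-name entry is flagged.
check_hosts undercloud.poc undercloud "127.0.0.1 undercloud.poc" || true
```

Against a real system you would call something like `check_hosts "$(hostname -f)" "$(hostname -s)" "$(cat /etc/hosts)"` before starting the install.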