From javier.pena at redhat.com Mon Oct 2 09:39:38 2017 From: javier.pena at redhat.com (Javier Pena) Date: Mon, 2 Oct 2017 05:39:38 -0400 (EDT) Subject: [rdo-list] [rdo][tripleo][kolla] Routine patch maintenance on trunk.rdoproject.org, Tue Oct 3rd In-Reply-To: <1975713115.12250295.1506937102196.JavaMail.zimbra@redhat.com> Message-ID: <135513730.12250617.1506937178236.JavaMail.zimbra@redhat.com> Hi, We need to do some routine patching on trunk.rdoproject.org on Oct 3rd, at 8:00 UTC. There will be a brief downtime for a reboot, where jobs using packages from RDO Trunk can fail. Sorry for the inconvenience. If you need additional information, please do not hesitate to contact us. Regards, Javier From rbowen at redhat.com Mon Oct 2 12:50:08 2017 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 02 Oct 2017 12:50:08 +0000 Subject: [rdo-list] Triangle OpenStack Meetup group needs a leader Message-ID: Hi, folks, Since I know that a number of you are in or around the Raleigh/RTP area, I hope that one of you is willing to step up to lead the Triangle OpenStack Meetup group. The current leader has stepped back, and the group will be discontinued (i.e., the registration on Meetup.com will lapse) in about a week unless someone is willing to take over. Please let me know if you're willing to do this, and I'll forward the email where you can take over. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Oct 2 14:31:42 2017 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 02 Oct 2017 14:31:42 +0000 Subject: [rdo-list] Upcoming meetups, October 2, 2017 Message-ID: The following are the meetups I'm aware of in the next two weeks where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Monday October 02 in Istanbul, TR: Ankara 12. Meetup - Ceph ve OpenStack - https://www.meetup.com/Turkey-OpenStack-Meetup/events/243543715/ * Tuesday October 03 in Sunnyvale, CA, US: OpenStack Pike Release Update & Redefining Protection in the Cloud - https://www.meetup.com/openstack/events/243811056/ * Wednesday October 04 in Prague, CZ: OpenStack - Zkušenosti s implementacemi a provozem - https://www.meetup.com/OpenStack-Czech-User-Group-Meetup/events/240756834/ * Wednesday October 04 in Helsinki, FI: OpenStackOperators Finland Video Call - https://www.meetup.com/OpenStack-Finland-User-Group/events/243462697/ * Saturday October 07 in Littleton, CO, US: OpenStack by David Willson 10-07-2017 - https://www.meetup.com/sofreeus/events/243043418/ * Saturday October 07 in Ha Noi, VN: OpenStack PTG Denver Recap Meetup #16 - VietOpenStack - https://www.meetup.com/VietOpenStack/events/243769562/ * Tuesday October 10 in Sydney, AU: Canberra OpenStack meetup - https://www.meetup.com/Australian-OpenStack-User-Group/events/243693862/ * Wednesday October 11 in Istanbul, TR: İstanbul 13.
Meetup, Konu: Ceph Yapıtaşları, OpenStack Entegrasyonu - https://www.meetup.com/Turkey-OpenStack-Meetup/events/243543951/ * Thursday October 12 in San Diego, CA, US: From the Experts: Cloud Computing - A Panel Discussion - https://www.meetup.com/OpenStackSanDiego/events/242450015/ * Saturday October 14 in Denver, CO, US: Learn OpenStack - https://www.meetup.com/it-ntl/events/243488239/ * Saturday October 14 in Bangalore, IN: OpenStack Mini-Conf at OSI Days India 2017 - https://www.meetup.com/Indian-OpenStack-User-Group/events/243384587/ * Monday October 16 in Tel Aviv-Yafo, IL: ONAP VNF Onboarding Hack Day - https://www.meetup.com/OpenStack-Israel/events/243488892/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hguemar at fedoraproject.org Mon Oct 2 15:00:05 2017 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 2 Oct 2017 15:00:05 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20171002150005.75AA060005A2@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2017-10-04 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From chkumar246 at gmail.com Tue Oct 3 01:30:04 2017 From: chkumar246 at gmail.com (chkumar246 at gmail.com) Date: Tue, 3 Oct 2017 01:30:04 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO Office Hours Message-ID: <20171003013004.3EDAE609A604@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO Office Hours on 2017-10-03 from 13:30:00 to 15:30:00 UTC The meeting will be about: The meeting will be about RDO Office Hour. Aim: To keep up with increasing participation, we'll host office hours to add more easy fixes and provide mentoring to newcomers. [Agenda at RDO Office Hour easyfixes](https://review.rdoproject.org/etherpad/p/rdo-office-hour-easyfixes) Source: https://apps.fedoraproject.org/calendar/meeting/6374/ From javier.pena at redhat.com Tue Oct 3 08:37:49 2017 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 3 Oct 2017 04:37:49 -0400 (EDT) Subject: [rdo-list] [rdo][tripleo][kolla] Routine patch maintenance on trunk.rdoproject.org, Tue Oct 3rd In-Reply-To: <135513730.12250617.1506937178236.JavaMail.zimbra@redhat.com> References: <135513730.12250617.1506937178236.JavaMail.zimbra@redhat.com> Message-ID: <839861556.12562895.1507019869829.JavaMail.zimbra@redhat.com> > Hi, > > We need to do some routine patching on trunk.rdoproject.org on Oct 3rd, at > 8:00 UTC. There will be a brief downtime for a reboot, where jobs using > packages from RDO Trunk can fail. Sorry for the inconvenience. > Hi, The maintenance is now complete. Everything should be back to normal; please contact us if you find any issues. Regards, Javier > If you need additional information, please do not hesitate to contact us. > > Regards, > Javier > From adarazs at redhat.com Tue Oct 3 11:07:21 2017 From: adarazs at redhat.com (Attila Darazs) Date: Tue, 3 Oct 2017 13:07:21 +0200 Subject: [rdo-list] Where and how should the DLRN API based promotion run?
Message-ID: <68862ad3-a37b-2044-1757-de7420ff3b7c@redhat.com> I want to follow up this conversation from this review: https://review.rdoproject.org/r/9846 For reference, we're talking about the way we will do promotions based on DLRN API using this piece of script I called dlrnapi_promoter[1]. > Attila, John, Wes, I'm not convinced we need a dedicated machine > for this. > Can we take some time to discuss if a cron on a machine is the > right approach in the first place ? > > I feel there's a lot of different options available but we haven't > had the opportunity to discuss them. > > For example, a jenkins job that would trigger periodically or > through content change on the DLRN API result pages ? I'm sure we > could come up with different ideas. These are the most obvious other options (for me): 1. Make this promotion logic part of DLRN -- I would have preferred this but didn't see interest in this from Javier or from any of you when I pitched it a while ago -- probably too late in the design process of the API. Even the promotion API that we really needed was an afterthought for DLRN API, so we didn't really cooperate on the design to begin with. Adding this logic would have been my preference. We should do better next time. 2. Run this separately but *on* the DLRN server. I couldn't even get a proper approval for Gabriele, Sagi, and me to get submit rights in the 'config' repo after a month[2]. I didn't even try to force this -- seems like our communication and cooperation are inflexible enough for now to not try to force this level of cohabitation to save the resources to run a single VM. I think it makes sense to run this separately. 3. Run this in a zuul job constantly polling: we would use the same amount of resources as having a dedicated machine, so there's no good reason for doing it. 4. Run this based on some triggers: We want to be able to rerun failed jobs (RDO Phase1 & Phase2) and have the promotion succeed. We will have a ton of jobs that would trigger these scripts when they finish, and there's no point in doing it after every single job; a time-based check seems to be more useful. If we don't run it after every single job finishes, it might miss a possible window for promotion. So in summary, the point of using the DLRN API was to avoid relying on random places changing, triggers, etc. The source of truth should be the DLRN API for promotion, and the most straightforward way to check for results now is polling. I wouldn't mind if we eventually integrate this functionality in DLRN/DLRN API and when the promotion conditions are true, it could trigger jobs. Though we couldn't trigger stuff on the intranet -- polling there still makes sense, but we could poll some DLRN page for sure. This script[1] and VM instance are what we have now due to 1) and 2) not going through. It would definitely be a lot saner and cheaper, resource- and maintenance-wise, to do these calculations for promotions in the DLRN API and have DLRN trigger some jobs when they are true for a given hash, but I didn't feel capable of adding this to DLRN; it was simpler to use the API as it was designed. I'm happy to help start integrating this into DLRN, but as of now we should poll the API, and for polling the most reasonable solution is to have this on a constantly running machine vs. a long-running job that does polling. Let me know what you think!
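To make that concrete, each polling pass of the promoter conceptually boils down to two API calls. This is only a rough sketch -- the endpoint paths follow my reading of the DLRN API docs, and the worker URL, hashes and credentials below are all placeholders:

    # The DLRN API worker to poll, and the candidate repo to evaluate.
    API=https://trunk.rdoproject.org/api-centos-master-uc
    COMMIT=abc123 DISTRO=def456

    # 1. Fetch the successful CI votes recorded against that repo...
    curl -s "$API/api/repo_status?commit_hash=$COMMIT&distro_hash=$DISTRO&success=true"

    # 2. ...and if every job required by the promotion criteria shows up in
    #    that list, request the promotion, i.e. move the symlink on the server.
    curl -s -u promoter:secret -H "Content-Type: application/json" \
        -d "{\"commit_hash\": \"$COMMIT\", \"distro_hash\": \"$DISTRO\", \"promote_name\": \"current-tripleo\"}" \
        "$API/api/promote"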
Attila [1] https://github.com/rdo-infra/ci-config/tree/master/ci-scripts/dlrnapi_promoter [2] https://www.redhat.com/archives/rdo-list/2017-September/msg00008.html From chkumar246 at gmail.com Tue Oct 3 15:31:48 2017 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 3 Oct 2017 21:01:48 +0530 Subject: [rdo-list] [Minutes]RDO Office Hour - 2017-10-03 Message-ID: ================================== #rdo: RDO Office Hour - 2017-10-03 ================================== Meeting started by chandankumar at 13:32:39 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_office_hour___2017_10_03/2017/rdo_office_hour___2017_10_03.2017-10-03-13.32.log.html . Meeting summary --------------- * Roll Call (chandankumar, 13:33:27) * LINK: Easyfix#4 Reviews: https://review.rdoproject.org/r/#/q/topic:easyfix/4+status:open (chandankumar, 13:39:18) * LINK: Easyfix#5 Reviews: https://review.rdoproject.org/r/#/q/status:open+branch:rpm-master+topic:easyfix/5 (chandankumar, 13:39:29) * https://github.com/redhat-openstack/easyfix/issues/11 -> Closed (chandankumar, 13:59:25) * LINK: https://review.rdoproject.org/r/#/c/9717/ (tosky, 14:50:50) * LINK: https://review.rdoproject.org/r/#/c/9543/ (tosky, 14:50:55) * LINK: https://review.rdoproject.org/r/#/c/9544/ (tosky, 14:50:59) Meeting ended at 15:30:49 UTC. People present (lines said) --------------------------- * rdogerrit (42) * chandankumar (26) * snecklifter (11) * openstack (8) * tosky (6) * aditya_r (6) * jpena (5) * panda (5) * apevec (4) * number80 (3) * jatanmalde (3) * sfbender (3) * rbowen (1) * jtomasek (1) * amoralej (1) * EmilienM (1) * adarazs (1) Generated by `MeetBot`_ 0.1.4 From javier.pena at redhat.com Tue Oct 3 16:12:23 2017 From: javier.pena at redhat.com (Javier Pena) Date: Tue, 3 Oct 2017 12:12:23 -0400 (EDT) Subject: [rdo-list] Where and how should the DLRN API based promotion run? In-Reply-To: <68862ad3-a37b-2044-1757-de7420ff3b7c@redhat.com> References: <68862ad3-a37b-2044-1757-de7420ff3b7c@redhat.com> Message-ID: <1727632171.12695289.1507047143749.JavaMail.zimbra@redhat.com> > I want to follow up this conversation from this review: > > https://review.rdoproject.org/r/9846 > > For reference, we're talking about the way we will do promotions based > on DLRN API using this piece of script I called dlrnapi_promoter[1]. > > > Attila, John, Wes, I'm not convinced we need a dedicated machine > > for this. > > Can we take some time to discuss if a cron on a machine is the > > right approach in the first place ? > > > > I feel there's a lot of different options available but we haven't > > had the opportunity to discuss them. > > > > For example, a jenkins job that would trigger periodically or > > through content change on the DLRN API result pages ? I'm sure we > > could come up with different ideas. > > These are the most obvious other options (for me): > > 1. Make this promotion logic part of DLRN -- I would have preferred this > but didn't see interest in this from Javier or from any of you when I > pitched it a while ago -- probably too late in the design process of the > API. Even the promotion API that we really needed was an afterthought > for DLRN API, so we didn't really cooperate on the design to begin with. > Adding this logic would have been my preference. We should do better > next time. > The DLRN API was designed to be pretty much a "dumb" reporting API. Adding all this logic to the API would make it too RDO-pipeline-specific, so I still think this is better handled as an external tool. 
With this, we can change all promotion logic without even touching the API itself. > 2. Run this separately but *on* the DLRN server. I couldn't even get a > proper approval for Gabriele, Sagi, and me to get submit rights in the > 'config' repo after a month[2]. I didn't even try to force this -- seems > like our communication and cooperation are inflexible enough for now to > not try to force this level of cohabitation to save the resources to run > a single VM. I think it makes sense to run this separately. > This sounds good to me. We already have a promoter user in the DLRN instance, and it should not be too complicated to add the required code to puppet-dlrn to use the promoter script, then add a cron job. > 3. Run this in a zuul job constantly polling: we would use the same > amount of resources as having a dedicated machine, so there's no good > reason for doing it. > > 4. Run this based on some triggers: We want to be able to rerun failed > jobs (RDO Phase1 & Phase2) and have the promotion succeed. We will have > a ton of jobs that would trigger these scripts when they finish, and > there's no point in doing it after every single job; a time-based check > seems to be more useful. If we don't run it after every single job > finishes, it might miss a possible window for promotion. > That could be a good option, depending on the details. David, can you share your thoughts on this? Cheers, Javier > So in summary, the point of using the DLRN API was to avoid relying on random > places changing, triggers, etc. The source of truth should be the DLRN API > for promotion, and the most straightforward way to check for results now > is polling. > > I wouldn't mind if we eventually integrate this functionality in > DLRN/DLRN API and when the promotion conditions are true, it could > trigger jobs. Though we couldn't trigger stuff on the intranet -- polling > there still makes sense, but we could poll some DLRN page for sure. > > This script[1] and VM instance are what we have now due to 1) and 2) not > going through. It would definitely be a lot saner and cheaper, > resource- and maintenance-wise, to do these calculations for promotions in > the DLRN API and have DLRN trigger some jobs when they are true for a given > hash, but I didn't feel capable of adding this to DLRN; it was simpler > to use the API as it was designed. > > I'm happy to help start integrating this into DLRN, but as of now we > should poll the API, and for polling the most reasonable solution is to > have this on a constantly running machine vs. a long-running job that > does polling. > > Let me know what you think! > > Attila > > [1] > https://github.com/rdo-infra/ci-config/tree/master/ci-scripts/dlrnapi_promoter > [2] https://www.redhat.com/archives/rdo-list/2017-September/msg00008.html > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From rbowen at redhat.com Tue Oct 3 21:06:50 2017 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 3 Oct 2017 17:06:50 -0400 Subject: [rdo-list] RDO booth at OpenStack Summit: Help needed Message-ID: <01e23b18-0152-8e12-e0a1-2d52a876e708@redhat.com> As always, RDO will have a presence within the Red Hat booth at OpenStack Summit. We will have the usual stuff: ducks, stickers, and RDO/OpenStack cheatsheet bookmarks.
As you begin to look at the Summit schedule and plan out your days, please consider spending one or more of your free sessions answering questions at the RDO booth. We need friendly people who are willing to answer questions all the way from "What's RDO?" to "I found a bug in Neutron and need help troubleshooting it." The schedule is at https://etherpad.openstack.org/p/rdo-sydney-summit-booth and I'll be reminding you of it over the coming weeks. The times are (roughly) in sync with sessions so you should be able to do a shift and make it to the next session. Thanks! -- Rich Bowen: Community Architect rbowen at redhat.com @rbowen // @RDOCommunity // @CentOSProject 1 859 351 9166 From gcerami at redhat.com Wed Oct 4 14:11:43 2017 From: gcerami at redhat.com (Gabriele Cerami) Date: Wed, 4 Oct 2017 15:11:43 +0100 Subject: [rdo-list] Where and how should the DLRN API based promotion run? In-Reply-To: <1727632171.12695289.1507047143749.JavaMail.zimbra@redhat.com> References: <68862ad3-a37b-2044-1757-de7420ff3b7c@redhat.com> <1727632171.12695289.1507047143749.JavaMail.zimbra@redhat.com> Message-ID: <20171004141143.pioznbi2aw4kdpzg@localhost> On 03 Oct, Javier Pena wrote: > > I want to follow up this conversation from this review: > > > > https://review.rdoproject.org/r/9846 > > > > For reference, we're talking about the way we will do promotions based > > on DLRN API using this piece of script I called dlrnapi_promoter[1]. > > > > > Attila, John, Wes, I'm not convinced we need a dedicated machine > > > for this. > > > Can we take some time to discuss if a cron on a machine is the > > > right approach in the first place ? > > > > > > I feel there's a lot of different options available but we haven't > > > had the opportunity to discuss them. > > > > > > For example, a jenkins job that would trigger periodically or > > > through content change on the DLRN API result pages ? I'm sure we > > > could come up with different ideas. Sorry if I'm bending the topic a bit, but until we find a consensus on a new solution, we need to move forward with what we have. The promotion scripts need to access the images server to update the links. We're currently using the uploader user, but have no access. Can someone add a key we created on the promotion server to allow such access? Thanks. From gcerami at redhat.com Wed Oct 4 14:53:35 2017 From: gcerami at redhat.com (Gabriele Cerami) Date: Wed, 4 Oct 2017 15:53:35 +0100 Subject: [rdo-list] [infra] images.rdoproject server cleanup changes Message-ID: <20171004145335.irwwlx476syussiz@localhost> Hi, To allow the rdophase2 cache servers time to download the images for caching purposes, we are stopping the automatic removal of all previously promoted images at each promotion. We'll change the cleanup process to delete images after a configurable threshold in days or number of images. This will probably increase the storage space needed, but it's essential for downstream, even more so now that the pace of upstream promotions will potentially increase. Thanks. From jlabarre at redhat.com Wed Oct 4 14:58:59 2017 From: jlabarre at redhat.com (James LaBarre) Date: Wed, 4 Oct 2017 10:58:59 -0400 Subject: [rdo-list] Extending/Scaling quickstart-built overcloud Message-ID: I'm wondering if it is possible to add virtual nodes/services to an overcloud built with Quickstart, rather than having to re-build the entire setup. I would like to set up Swift and Horizon, maybe add in Ceph once I figure out why most of the HDDs in my system aren't showing up.
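As far as I understand, re-running the openstack overcloud deploy command against an existing stack performs a Heat stack update rather than a fresh install, so adding services or nodes should be a matter of repeating the deploy with extra arguments. An unverified sketch of what I would try -- the environment file and scale counts below are only examples:

    source ~/stackrc
    # Re-running the deploy updates the existing overcloud stack in place;
    # the extra environment file enables the Ceph storage backend, and the
    # scale flags grow the node counts (example values only).
    openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
        --control-scale 1 --compute-scale 2 --ceph-storage-scale 1

Is that the right approach, or does Quickstart need to regenerate its own configuration as well?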
From amoralej at redhat.com Wed Oct 4 16:16:18 2017 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 4 Oct 2017 18:16:18 +0200 Subject: [rdo-list] [Meeting] RDO meeting (2017-10-04) Minutes Message-ID: ============================== #rdo: RDO meeting - 2017-10-04 ============================== Meeting started by amoralej at 15:00:43 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2017_10_04/2017/rdo_meeting___2017_10_04.2017-10-04-15.00.log.html . Meeting summary --------------- * roll call (amoralej, 15:00:54) * ci-config cores (amoralej, 15:05:08) * https://www.redhat.com/archives/rdo-list/2017-September/msg00008.html (amoralej, 15:05:25) * current committers are https://review.rdoproject.org/r/#/admin/groups/6,members and https://review.rdoproject.org/r/#/admin/groups/2411,members (amoralej, 15:17:31) * ACTION: jpena to propose a formal policy about becoming core for infra (amoralej, 15:19:02) * AGREED: to give core access to config project to adarazs trown and sshnaidm|pto (amoralej, 15:25:54) * Mock 1.4 and network access in DLRN workers (amoralej, 15:34:43) * ACTION: jpena to disable networking in DLRN runs for all workers (amoralej, 15:43:17) * OpenStack SIG in Fedora (amoralej, 15:43:34) * https://www.redhat.com/archives/rdo-list/2017-September/msg00072.html (amoralej, 15:43:51) * Initial slate: amoralej, apevec, chandankumar, dmsimard, jpena, snecklifter, tonyb, tristanC and hguemar (amoralej, 15:44:06) * ACTION: number80 start paperwork to create the Fedora OpenStack SIG (number80, 15:47:46) * RDO booth volunteers needed for OpenStack Summit - https://etherpad.openstack.org/p/rdo-sydney-summit-booth (amoralej, 15:48:31) * RDO booth volunteers needed for OpenStack Summit - https://etherpad.openstack.org/p/rdo-sydney-summit-booth (rbowen, 15:48:38) * Tentative test day schedule - https://www.rdoproject.org/testday/ - please point out any date conflicts (amoralej, 15:49:01) * Tentative test day schedule - https://www.rdoproject.org/testday/ - please point out any date conflicts (rbowen, 15:49:07) * DLRN Release format (amoralej, 15:51:50) * ACTION: jpena to propose DLRN patch to change release to 0.1.. as a configurable option (jpena, 16:02:07) * ACTION: jruzicka to support both current and new DLRN release format in rdopkg (jruzicka, 16:03:43) * open floor (amoralej, 16:04:16) * ACTION: jpena will chair next meeting (amoralej, 16:06:06) Meeting ended at 16:06:31 UTC. Action items, by person ----------------------- * jpena * jpena to propose a formal policy about becoming core for infra * jpena to disable networking in DLRN runs for all workers * jpena to propose DLRN patch to change release to 0.1..
as a configurable option * jpena will chair next meeting * jruzicka * jruzicka to support both current and new DLRN release format in rdopkg * number80 * number80 start paperwork to create the Fedora OpenStack SIG * openstack * number80 start paperwork to create the Fedora OpenStack SIG People present (lines said) --------------------------- * amoralej (103) * number80 (57) * adarazs (28) * jpena (24) * apevec (20) * jruzicka (19) * rbowen (16) * chandankumar (16) * jschlueter (14) * dmsimard (13) * openstack (13) * trown (7) * rdogerrit (7) * Duck (5) * mfedosin (5) * jatanmalde (2) * myoung (1) * snecklifter (1) Generated by `MeetBot`_ 0.1.4 From bderzhavets at hotmail.com Wed Oct 4 16:18:33 2017 From: bderzhavets at hotmail.com (Boris Derzhavets) Date: Wed, 4 Oct 2017 16:18:33 +0000 Subject: [rdo-list] Extending/Scaling quickstart-built overcloud In-Reply-To: References: Message-ID: ________________________________ From: rdo-list-bounces at redhat.com on behalf of James LaBarre Sent: Wednesday, October 4, 2017 5:58 PM To: rdo-list at redhat.com Subject: [rdo-list] Extending/Scaling quickstart-built overcloud I'm wondering if it is possible to add virtual nodes/services to an overcloud built with Quickstart, rather than having to re-build the entire setup. I would like to set up Swift and Horizon, maybe add in Ceph once I figure out why most of the HDDs in my system aren't showing up. > Per my understanding at this point you are supposed to run: $ openstack stack update overcloud . . . . . . . referencing the set of new, properly written heat templates. And through all releases between Mitaka and Ocata you DON'T have much chance to succeed doing so. At least in my experience. Please, tell me that I am wrong about that. Boris. > _______________________________________________ rdo-list mailing list rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list To unsubscribe: rdo-list-unsubscribe at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Thu Oct 5 08:52:46 2017 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 5 Oct 2017 09:52:46 +0100 Subject: [rdo-list] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens In-Reply-To: <20171004120825.w2jpfu7wmfsfxdh7@lyarwood.usersys.redhat.com> References: <20171004120825.w2jpfu7wmfsfxdh7@lyarwood.usersys.redhat.com> Message-ID: <20171005085246.a3s5kt27vxcn3kx2@lyarwood.usersys.redhat.com> Adding rdo-list in an attempt to get more feeback regarding this proposal, tl;dr can we ship python-virtualbmc in Newton? On 04-10-17 12:08:25, Lee Yarwood wrote: > Hello all, > > I'm currently working to get the tripleo-spec for fast-forward upgrades > out of WIP and merged ahead of the Queens M-1 milestone next week.
One > of the documented pre-requisite steps for fast-forward upgrades is for > an operator to linearly upgrade the undercloud from Newton (N) to Queens > (N+3): > > https://review.openstack.org/#/c/497257/ > > This is not possible at present with tripleo-quickstart deployed virtual > environments thanks to our use of the pxe_ssh Ironic driver in Newton > that has now been removed in Pike: > > https://docs.openstack.org/releasenotes/ironic/pike.html#id14 > > I briefly looked into migrating between pxe_ssh and the new default of > vbmc during the Ocata to Pike undercloud upgrade but I'd much rather > just deploy Newton using vbmc. AFAICT the only issue here is packaging > with the python-virtualbmc package not present in the Newton repos. > > With that in mind I've submitted the following changes that remove the > various conditionals in tripleo-quickstart that block the use of vbmc in > Newton and verified that this works by using the Ocata python-virtualbmc > package: > > https://review.openstack.org/#/q/topic:allow_vbmc_newton+(status:open+OR+status:merged) > > FWIW I can deploy successfully on Newton with these changes and then > upgrade the undercloud to Pike just fine. > > Would anyone be able to confirm *if* we could ship python-virtualbmc in > the Newton relevant repos? > > Thanks in advance, > > Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From javier.pena at redhat.com Thu Oct 5 10:48:35 2017 From: javier.pena at redhat.com (Javier Pena) Date: Thu, 5 Oct 2017 06:48:35 -0400 (EDT) Subject: [rdo-list] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens In-Reply-To: <20171005085246.a3s5kt27vxcn3kx2@lyarwood.usersys.redhat.com> References: <20171004120825.w2jpfu7wmfsfxdh7@lyarwood.usersys.redhat.com> <20171005085246.a3s5kt27vxcn3kx2@lyarwood.usersys.redhat.com> Message-ID: <1513500237.13126025.1507200515988.JavaMail.zimbra@redhat.com> > Adding rdo-list in an attempt to get more feeback regarding this > proposal, tl;dr can we ship python-virtualbmc in Newton? > Given the background, I think it's reasonable to add it to Newton, even though it is close to EOL. Could you open a review to rdoinfo and add the required tag after https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml#L4736-L4737 ? We could iron out any details in the review. Regards, Javier > On 04-10-17 12:08:25, Lee Yarwood wrote: > > Hello all, > > > > I'm currently working to get the tripleo-spec for fast-forward upgrades > > out of WIP and merged ahead of the Queens M-1 milestone next week. One > > of the documented pre-requisite steps for fast-forward upgrades is for > > an operator to linearly upgrade the undercloud from Newton (N) to Queens > > (N+3): > > > > https://review.openstack.org/#/c/497257/ > > > > This is not possible at present with tripleo-quickstart deployed virtual > > environments thanks to our use of the pxe_ssh Ironic driver in Newton > > that has now been removed in Pike: > > > > https://docs.openstack.org/releasenotes/ironic/pike.html#id14 > > > > I briefly looked into migrating between pxe_ssh and the new default of > > vbmc during the Ocata to Pike undercloud upgrade but I'd much rather > > just deploy Newton using vbmc. AFAICT the only issue here is packaging > > with the python-virtualbmc package not present in the Newton repos. 
> > > > With that in mind I've submitted the following changes that remove the > > various conditionals in tripleo-quickstart that block the use of vbmc in > > Newton and verified that this works by using the Ocata python-virtualbmc > > package: > > > > https://review.openstack.org/#/q/topic:allow_vbmc_newton+(status:open+OR+status:merged) > > > > FWIW I can deploy successfully on Newton with these changes and then > > upgrade the undercloud to Pike just fine. > > > > Would anyone be able to confirm *if* we could ship python-virtualbmc in > > the Newton relevant repos? > > > > Thanks in advance, > > > > Lee > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 > 2D76 > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From lyarwood at redhat.com Thu Oct 5 10:58:59 2017 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 5 Oct 2017 11:58:59 +0100 Subject: [rdo-list] [tripleo][quickstart][rdo] shipping python-virtualbmc in Newton to allow undercloud upgrades from Newton to Queens In-Reply-To: <1513500237.13126025.1507200515988.JavaMail.zimbra@redhat.com> References: <20171004120825.w2jpfu7wmfsfxdh7@lyarwood.usersys.redhat.com> <20171005085246.a3s5kt27vxcn3kx2@lyarwood.usersys.redhat.com> <1513500237.13126025.1507200515988.JavaMail.zimbra@redhat.com> Message-ID: <20171005105859.yvtn23tvkjctcq2r@lyarwood.usersys.redhat.com> On 05-10-17 10:48:35, Javier Pena wrote: > > Adding rdo-list in an attempt to get more feeback regarding this > > proposal, tl;dr can we ship python-virtualbmc in Newton? > > > > Given the background, I think it's reasonable to add it to Newton, > even though it is close to EOL. > > Could you open a review to rdoinfo and add the required tag after > https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml#L4736-L4737 > ? We could iron out any details in the review. Thanks Javier, I've created the following review for this: https://review.rdoproject.org/r/9981 Add python-virtualbmc to Newton Thanks! Lee -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From hamzy at us.ibm.com Thu Oct 5 17:15:57 2017 From: hamzy at us.ibm.com (Mark Hamzy) Date: Thu, 5 Oct 2017 12:15:57 -0500 Subject: [rdo-list] Building the overcloud image on ppc64le Message-ID: I am running into a dependency issue when building the overcloud images on ppc64le and was wondering what I am doing wrong... https://fedoraproject.org/wiki/User:Hamzy/ppc64le_Overcloud_builder_instance_problem1 -- Mark You must be the change you wish to see in the world. -- Mahatma Gandhi Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present. -- Marcus Aurelius -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Thu Oct 5 22:25:45 2017 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 6 Oct 2017 09:25:45 +1100 Subject: [rdo-list] Building the overcloud image on ppc64le In-Reply-To: References: Message-ID: <20171005222545.GA9578@thor.bakeyournoodle.com> On Thu, Oct 05, 2017 at 12:15:57PM -0500, Mark Hamzy wrote: > I am running into a dependency issue when building the overcloud images on > ppc64le and was wondering what I am doing wrong... > > https://fedoraproject.org/wiki/User:Hamzy/ppc64le_Overcloud_builder_instance_problem1 I don't think you're doing anything wrong, you just need http://cbs.centos.org/koji/buildinfo?buildID=17627 which hasn't been published. You could create your own local -override repo and try with that version. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From javier.pena at redhat.com Fri Oct 6 11:30:30 2017 From: javier.pena at redhat.com (Javier Pena) Date: Fri, 6 Oct 2017 07:30:30 -0400 (EDT) Subject: [rdo-list] Rules for config-core group in review.rdoproject.org In-Reply-To: <124027683.13384300.1507289368171.JavaMail.zimbra@redhat.com> Message-ID: <650891332.13384354.1507289430169.JavaMail.zimbra@redhat.com> Hi, During yesterday's meeting, we discussed again about the rules to become a core in the config repository for review.rdoproject.org. I have drafted some basic information/rules, and sent them as a PR to the RDO website: https://github.com/redhat-openstack/website/pull/1081. Please have a look at the PR and leave your thoughts there. Regards, Javier From hamzy at us.ibm.com Fri Oct 6 21:03:14 2017 From: hamzy at us.ibm.com (Mark Hamzy) Date: Fri, 6 Oct 2017 16:03:14 -0500 Subject: [rdo-list] Building the overcloud image on ppc64le In-Reply-To: <20171005222545.GA9578@thor.bakeyournoodle.com> References: <20171005222545.GA9578@thor.bakeyournoodle.com> Message-ID: Tony Breeds wrote on 10/05/2017 05:25:45 PM: > On Thu, Oct 05, 2017 at 12:15:57PM -0500, Mark Hamzy wrote: > > I am running into a dependency issue when building the overcloud images on > > ppc64le and was wondering what I am doing wrong... > > > > https://fedoraproject.org/wiki/User:Hamzy/ > ppc64le_Overcloud_builder_instance_problem1 > > I don't think you're doing anything wrong, you just need > http://cbs.centos.org/koji/buildinfo?buildID=17627 > > which hasn't been published. You could create your own local -override > repo and try with that version. Yeah, as I stated in the URL, if I add that package with another repository it works. It sounds to me like the community has built an older version. This is the problem that I am looking to fix. Do the people who build the overcloud image know what might be wrong? erlang-19.3.6.1-1.el7 (built on Fri, 14 Jul 2017 00:42:33 UTC) was tagged: cloud7-openstack-common-candidate erlang-18.3.4.5-4.el7 (built on Fri, 01 Sep 2017 11:50:45 UTC) is tagged: cloud7-openstack-common-candidate,cloud7-openstack-common-testing,cloud7-openstack-queens-testing -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Sun Oct 8 19:03:22 2017 From: dms at redhat.com (David Moreau Simard) Date: Sun, 8 Oct 2017 15:03:22 -0400 Subject: [rdo-list] Settling on container tags Message-ID: Hi, Could you please let us know what tags we plan on using for containers ? 
Right now I see 'tripleo-ci-testing' and 'passed-ci-test' but I also remember seeing 'passed-ci' although it doesn't seem like it exists anymore. It's important that we agree on using a given set of tags in order to ensure they're not getting caught up in the image pruning. Thanks, David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From tony at bakeyournoodle.com Sun Oct 8 21:32:50 2017 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 9 Oct 2017 08:32:50 +1100 Subject: [rdo-list] Building the overcloud image on ppc64le In-Reply-To: References: <20171005222545.GA9578@thor.bakeyournoodle.com> Message-ID: <20171008213250.GA20767@thor.bakeyournoodle.com> On Fri, Oct 06, 2017 at 04:03:14PM -0500, Mark Hamzy wrote: > Tony Breeds wrote on 10/05/2017 05:25:45 PM: > > > On Thu, Oct 05, 2017 at 12:15:57PM -0500, Mark Hamzy wrote: > > > I am running into a dependency issue when building the overcloud > images on > > > ppc64le and was wondering what I am doing wrong... > > > > > > https://fedoraproject.org/wiki/User:Hamzy/ppc64le_Overcloud_builder_instance_problem1 > > > > I don't think you're doing anything wrong, you just need > > http://cbs.centos.org/koji/buildinfo?buildID=17627 > > > > which hasn't been published. You could create your own local -override > > repo and try with that version. > > Yeah, as I stated in the URL, if I add that package with another > repository it works. It sounds to me like the community has built an > older version. This is the problem that I am looking to fix. Do the > people who build the overcloud image know what might be wrong? > > erlang-19.3.6.1-1.el7 (built on Fri, 14 Jul 2017 00:42:33 UTC) was tagged: > cloud7-openstack-common-candidate > erlang-18.3.4.5-4.el7 (built on Fri, 01 Sep 2017 11:50:45 UTC) is tagged: > > cloud7-openstack-common-candidate,cloud7-openstack-common-testing,cloud7-openstack-queens-testing It's my understanding that the new version just hasn't passed a CI run and therefore hasn't been promoted. Alan or Haikel would know more. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From adarazs at redhat.com Mon Oct 9 09:16:23 2017 From: adarazs at redhat.com (Attila Darazs) Date: Mon, 9 Oct 2017 11:16:23 +0200 Subject: [rdo-list] Settling on container tags In-Reply-To: References: Message-ID: On 10/08/2017 09:03 PM, David Moreau Simard wrote: > Hi, > > Could you please let us know what tags we plan on using for containers ? > > Right now I see 'tripleo-ci-testing' and 'passed-ci-test' but I also > remember seeing 'passed-ci' although it doesn't seem like it exists > anymore. > It's important that we agree on using a given set of tags in order to > ensure they're not getting caught up in the image pruning. This script[1] is doing the promotion/pushing of the images. Gabriele was trying to avoid using the proper names during testing, so there are a bunch of WIP ones. We ended up unifying it with the qcow2 images: they will be tagged the same as the links we have on the DLRN server: tripleo-ci-testing = promotion job working containers current-tripleo = upstream promoted containers current-tripleo-rdo = phase1 promoted containers[2] We have a documentation task to explain all this; I will work on it today, hopefully. May take a while to finish it and merge it though.
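To make the mapping concrete, what the push playbook does per image is essentially the retag-and-push below. This is illustrative only -- the registry host and image name are placeholders, and the real image list comes from the playbook:

    # Pull the image that the promotion jobs just validated...
    docker pull trunk.registry.rdoproject.org/master/centos-binary-nova-api:tripleo-ci-testing
    # ...retag it with the promotion name matching the DLRN symlink...
    docker tag trunk.registry.rdoproject.org/master/centos-binary-nova-api:tripleo-ci-testing \
        trunk.registry.rdoproject.org/master/centos-binary-nova-api:current-tripleo
    # ...and push the new tag back to the registry.
    docker push trunk.registry.rdoproject.org/master/centos-binary-nova-api:current-tripleo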
Attila [1] https://github.com/rdo-infra/ci-config/blob/master/ci-scripts/container-push/container-push.yml [2] though we don't really test them currently in phase1, we'll promote them nonetheless when phase1 promotes. From mrunge at redhat.com Mon Oct 9 11:15:35 2017 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 9 Oct 2017 13:15:35 +0200 Subject: [rdo-list] [Proposal] Improve clients maintenance on Fedora In-Reply-To: <20170928213233.GH28251@thor.bakeyournoodle.com> References: <20170928213233.GH28251@thor.bakeyournoodle.com> Message-ID: <20171009111535.ebmzfgfhp5otug7i@sofja.berg.ol> On Thu, Sep 28, 2017 at 09:32:33PM +0000, Tony Breeds wrote: > > 1. create an OpenStack SIG within Fedora, initial members will be current > > RDO provenpackagers and volunteers > > /me volunteers! Joining the game quite late, I'm interested as well. Rebuilding the packages locally on demand does not really scale well ;-) Matthias -- Matthias Runge Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: not available URL: From hguemar at fedoraproject.org Mon Oct 9 12:38:46 2017 From: hguemar at fedoraproject.org (Haïkel) Date: Mon, 9 Oct 2017 14:38:46 +0200 Subject: [rdo-list] New SIG: OpenStack Message-ID: Hi, I'd like to announce the beginning of a new SIG: OpenStack. During these last years, I've been more or less maintaining the OpenStack clients, and I'd like to pass that (burden) over to a SIG. OpenStack clients/libraries are quite tied to each other, so it is difficult to maintain them without provenpackager permissions, and it is also a lot of work to sync requirements and do the testing. So we will be working with the RDO project to provide the latest, validated OpenStack client packaging. We already have ten packagers who signed up [1], and we welcome anyone who wants to help [2]. So here are our immediate plans: 1. create an @openstack SIG group 2. transfer package ownership to the @openstack SIG 3. set up CI jobs to regularly test and validate Fedora OpenStack client packages. 4. automate package syncing with RDO (still requires human validation, no bot!) [3] 5. Enjoy the latest validated OpenStack clients on Fedora. For now, the focus will be solely on clients. Regards, H. [1] Most of them are either upstream developers and/or active in RDO and Fedora projects [2] If you're new to packaging, feel free to join, I'd be happy to mentor/sponsor you! [3] RDO has already automated most packaging tasks, and has proper CI to test against real OpenStack deployments From hamzy at us.ibm.com Mon Oct 9 14:07:36 2017 From: hamzy at us.ibm.com (Mark Hamzy) Date: Mon, 9 Oct 2017 09:07:36 -0500 Subject: [rdo-list] Looking for help in properly configuring a TripleO environment Message-ID: I am looking for help in properly configuring a TripleO environment on a machine with two network cards talking to two baremetal nodes in the overcloud also with two network cards. One network will be for provisioning and one will be for internet connection.
I have documented my current configuration at: https://fedoraproject.org/wiki/User:Hamzy/TripleO_mixed_undercloud_overcloud_try8 2017-09-23 23:54:49Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed 2017-09-23 23:54:49Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully 2017-09-23 23:54:49Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE state changed 2017-09-24 00:05:06Z [overcloud.ComputeAllNodesValidationDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment d54b96a6-1860-4802-ad45-db4ece0317e4 failed (1) 2017-09-24 00:05:06Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_FAILED Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 1 2017-09-24 00:05:06Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 1 2017-09-24 00:05:07Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_FAILED Error: resources.ComputeAllNodesValidationDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1 2017-09-24 00:05:07Z [overcloud]: CREATE_FAILED Resource CREATE failed: Error: resources.ComputeAllNodesValidationDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1 Stack overcloud CREATE_FAILED overcloud.ComputeAllNodesValidationDeployment.0: resource_type: OS::Heat::StructuredDeployment physical_resource_id: d54b96a6-1860-4802-ad45-db4ece0317e4 status: CREATE_FAILED status_reason: | Error: resources[0]: Deployment to server failed: deploy_status_code : Deployment exited with non-zero status code: 1 deploy_stdout: | ... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... Ping to 172.16.0.14 failed. Retrying... FAILURE (truncated, view all with --long) deploy_stderr: | 172.16.0.14 is not pingable. Local Network: 172.16.0.0/24 -- Mark You must be the change you wish to see in the world. -- Mahatma Gandhi Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present. -- Marcus Aurelius -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Oct 9 14:17:34 2017 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 9 Oct 2017 10:17:34 -0400 Subject: [rdo-list] Upcoming meetups Message-ID: <8307a7c1-9895-cd6a-c9ac-1d4532bcd1ea@redhat.com> The following are the meetups I'm aware of in the next two weeks where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. Also, a reminder that several of us will be at CERN, in Meyrin, Switzerland, next Friday, for the CentOS Dojo, which will feature quite a bit of OpenStack/RDO content.
--Rich * Tuesday October 10 in Sydney, AU: Canberra OpenStack meetup - https://www.meetup.com/Australian-OpenStack-User-Group/events/243693862/ * Wednesday October 11 in Istanbul, TR: İstanbul 13. Meetup, Konu: Ceph Yapıtaşları, OpenStack Entegrasyonu - https://www.meetup.com/Turkey-OpenStack-Meetup/events/243543951/ * Thursday October 12 in San Diego, CA, US: From the Experts: Cloud Computing - A Panel Discussion - https://www.meetup.com/OpenStackSanDiego/events/242450015/ * Thursday October 12 in Orlando, FL, US: Cloud Foundry PaaS and OpenStack - https://www.meetup.com/Orlando-Central-Florida-OpenStack-Meetup/events/243971049/ * Thursday October 12 in Boston, MA, US: OpenStack Boston Meetup - Massachusetts Open Cloud - https://www.meetup.com/Openstack-Boston/events/243883198/ * Saturday October 14 in Denver, CO, US: Learn OpenStack - https://www.meetup.com/it-ntl/events/243488239/ * Saturday October 14 in Taipei, TW: 10月 meetup -- OpenStack ???? - https://www.meetup.com/OpenStack-User-Group-Taiwan/events/243932085/ * Saturday October 14 in Bangalore, IN: OpenStack Mini-Conf at OSI Days India 2017 - https://www.meetup.com/Indian-OpenStack-User-Group/events/243384587/ * Monday October 16 in Tel Aviv-Yafo, IL: ONAP VNF Onboarding Hack Day - https://www.meetup.com/OpenStack-Israel/events/243488892/ * Tuesday October 17 in Istanbul, TR: Openstack Days 2017 - https://www.meetup.com/Turkey-OpenStack-Meetup/events/243535231/ * Tuesday October 17 in Cork, IE: Leveraging Apache Kafka for Web Crawling and Data Processing - https://www.meetup.com/OpenStack-Cork/events/243678867/ * Tuesday October 17 in Guadalajara, MX: Administración de Redes en ambiente de nube con OpenStack - https://www.meetup.com/OpenStack-GDL/events/243661445/ * Wednesday October 18 in Copenhagen, DK: OpenStack Days Nordic - Copenhagen 2017 - https://www.meetup.com/openstackdk/events/241302507/ * Wednesday October 18 in Copenhagen, DK: Ericsson as Proud Sponsor of OpenStack Days Nordic! - https://www.meetup.com/EricssonDenmark/events/243550623/ * Thursday October 19 in Kazan, RU: ?????? OpenStack Meetup в Казани
- https://www.meetup.com/OpenStack-Russia-Kazan/events/243518457/ * Thursday October 19 in Köln, DE: OpenTechThoughts: TechExperience & TechTalk - https://www.meetup.com/OpenStack-Cologne/events/243821271/ * Thursday October 19 in Portland, OR, US: Cloud Instance Bootstrapping - https://www.meetup.com/openstack-pdx/events/244035380/ * Thursday October 19 in Chesterfield, MO, US: SUSE: Cloud strategy for VMs and Containers - https://www.meetup.com/OpenStack-STL/events/240759508/ -- Rich Bowen: Community Architect rbowen at redhat.com @rbowen // @RDOCommunity // @CentOSProject 1 859 351 9166 From hguemar at fedoraproject.org Mon Oct 9 15:00:04 2017 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 9 Oct 2017 15:00:04 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20171009150004.9115C6004C57@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2017-10-11 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From bfournie at redhat.com Mon Oct 9 15:06:20 2017 From: bfournie at redhat.com (Bob Fournier) Date: Mon, 9 Oct 2017 11:06:20 -0400 Subject: [rdo-list] Looking for help in properly configuring a TripleO environment In-Reply-To: References: Message-ID: Hi Mark, Could you also post the network configuration files you are using for the overcloud deployment, e.g. - network-environment.yaml - network-isolation-custom.yaml - any nic config files used by network-environment.yaml It looks like you are using an isolated network that is not reachable (172.16.0.14). Also, did you run introspection on these nodes? Thanks, Bob On Mon, Oct 9, 2017 at 10:07 AM, Mark Hamzy wrote: > I am looking for help in properly configuring a TripleO environment on a > machine with two network cards talking to two baremetal nodes in the > overcloud also with two network cards. One network will be for > provisioning and one will be for internet connection.
I have documented my > current configuration at: > > https://fedoraproject.org/wiki/User:Hamzy/TripleO_mixed_undercloud_overcloud_try8 > > > 2017-09-23 23:54:49Z [overcloud.ControllerAllNodesValidationDeployment.0]: > CREATE_COMPLETE state changed > 2017-09-23 23:54:49Z [overcloud.ControllerAllNodesValidationDeployment]: > CREATE_COMPLETE Stack CREATE completed successfully > 2017-09-23 23:54:49Z [overcloud.ControllerAllNodesValidationDeployment]: > CREATE_COMPLETE state changed > 2017-09-24 00:05:06Z [overcloud.ComputeAllNodesValidationDeployment.0]: > SIGNAL_IN_PROGRESS Signal: deployment d54b96a6-1860-4802-ad45-db4ece0317e4 failed (1) > 2017-09-24 00:05:06Z [overcloud.ComputeAllNodesValidationDeployment.0]: > CREATE_FAILED Error: resources[0]: Deployment to server failed: > deploy_status_code : Deployment exited with non-zero status code: 1 > 2017-09-24 00:05:06Z [overcloud.ComputeAllNodesValidationDeployment]: > CREATE_FAILED Resource CREATE failed: Error: resources[0]: Deployment to > server failed: deploy_status_code : Deployment exited with non-zero status > code: 1 > 2017-09-24 00:05:07Z [overcloud.ComputeAllNodesValidationDeployment]: > CREATE_FAILED Error: resources.ComputeAllNodesValidationDeployment.resources[0]: > Deployment to server failed: deploy_status_code: Deployment exited with > non-zero status code: 1 > 2017-09-24 00:05:07Z [overcloud]: CREATE_FAILED Resource CREATE failed: > Error: resources.ComputeAllNodesValidationDeployment.resources[0]: > Deployment to server failed: deploy_status_code: Deployment exited with > non-zero status code: 1 > > Stack overcloud CREATE_FAILED > > overcloud.ComputeAllNodesValidationDeployment.0: > resource_type: OS::Heat::StructuredDeployment > physical_resource_id: d54b96a6-1860-4802-ad45-db4ece0317e4 > status: CREATE_FAILED > status_reason: | > Error: resources[0]: Deployment to server failed: deploy_status_code : > Deployment exited with non-zero status code: 1 > deploy_stdout: | > ... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > Ping to 172.16.0.14 failed. Retrying... > FAILURE > (truncated, view all with --long) > deploy_stderr: | > 172.16.0.14 is not pingable. Local Network: 172.16.0.0/24 > > -- > Mark > > You must be the change you wish to see in the world. -- Mahatma Gandhi > Never let the future disturb you. You will meet it, if you have to, with > the same weapons of reason which today arm you against the present. -- > Marcus Aurelius > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hamzy at us.ibm.com Mon Oct 9 15:49:47 2017 From: hamzy at us.ibm.com (Mark Hamzy) Date: Mon, 9 Oct 2017 10:49:47 -0500 Subject: [rdo-list] Looking for help in properly configuring a TripleO environment In-Reply-To: References: Message-ID: Bob Fournier wrote on 10/09/2017 10:06:20 AM: > Could you also post the network configuration files you are using > for the overcloud deployment, e.g.
> - network-environment.yaml > - network-isolation-custom.yaml > - any nic config files used by network-environment.yaml Buried somewhat in that URL was the following: cp -r /usr/share/openstack-tripleo-heat-templates templates (cd templates/; wget --quiet -O - https://hamzy.fedorapeople.org/openstack-tripleo-heat-templates.patch | patch -p1) Is the patch file enough for people to view the configuration files? > It looks like you are using an isolated network that is not > reachable (172.16.0.14). > > Also, did you run introspection on these nodes? Well, the overcloud nodes are ppc64le machines and introspection does not work correctly yet IIRC. However, they both do deploy and I can watch that process on the consoles. They do get IPs from DHCP. [hamzy at overcloud-controller-0 ~]$ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP1p3s0f0: mtu 1500 qdisc mq portid 98be9454b240 state DOWN qlen 1000 link/ether 98:be:94:54:b2:40 brd ff:ff:ff:ff:ff:ff 3: enP1p3s0f1: mtu 1500 qdisc mq portid 98be9454b242 state DOWN qlen 1000 link/ether 98:be:94:54:b2:42 brd ff:ff:ff:ff:ff:ff 4: enP1p5s0f0: mtu 1500 qdisc mq portid 98be94546360 state DOWN qlen 1000 link/ether 98:be:94:54:63:60 brd ff:ff:ff:ff:ff:ff 5: enP1p5s0f1: mtu 1500 qdisc mq portid 98be94546362 state DOWN qlen 1000 link/ether 98:be:94:54:63:62 brd ff:ff:ff:ff:ff:ff 6: enP3p11s0f0: mtu 1500 qdisc mq portid 40f2e9316940 state DOWN qlen 1000 link/ether 40:f2:e9:31:69:40 brd ff:ff:ff:ff:ff:ff 7: enP3p11s0f1: mtu 1500 qdisc mq portid 40f2e9316942 state DOWN qlen 1000 link/ether 40:f2:e9:31:69:42 brd ff:ff:ff:ff:ff:ff 8: enP6p1s0f0: mtu 1500 qdisc mq portid 98be94541f80 state DOWN qlen 1000 link/ether 98:be:94:54:1f:80 brd ff:ff:ff:ff:ff:ff 9: enP6p1s0f1: mtu 1500 qdisc mq portid 98be94541f82 state DOWN qlen 1000 link/ether 98:be:94:54:1f:82 brd ff:ff:ff:ff:ff:ff 10: enP3p5s0f0: mtu 1500 qdisc mq portid 0100000000304534323130363730453131 state DOWN qlen 1000 link/ether 00:90:fa:74:05:50 brd ff:ff:ff:ff:ff:ff 11: enP3p5s0f1: mtu 1500 qdisc mq portid 0200000000304534323130363730453131 state DOWN qlen 1000 link/ether 00:90:fa:74:05:51 brd ff:ff:ff:ff:ff:ff 12: enP3p5s0f2: mtu 1500 qdisc mq master ovs-system portid 0300000000304534323130363730453131 state UP qlen 1000 link/ether 00:90:fa:74:05:52 brd ff:ff:ff:ff:ff:ff inet6 fe80::290:faff:fe74:552/64 scope link valid_lft forever preferred_lft forever 13: enP3p5s0f3: mtu 1500 qdisc mq portid 0400000000304534323130363730453131 state UP qlen 1000 link/ether 00:90:fa:74:05:53 brd ff:ff:ff:ff:ff:ff inet 9.114.118.245/24 brd 9.114.118.255 scope global dynamic enP3p5s0f3 valid_lft 64400sec preferred_lft 64400sec inet6 fe80::290:faff:fe74:553/64 scope link valid_lft forever preferred_lft forever 14: ovs-system: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether e2:41:9f:8c:f8:62 brd ff:ff:ff:ff:ff:ff 15: br-ex: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000 link/ether 00:90:fa:74:05:52 brd ff:ff:ff:ff:ff:ff inet 9.114.118.245/24 brd 9.114.118.255 scope global br-ex valid_lft forever preferred_lft forever inet6 fd55:faaf:e1ab:3d9:290:faff:fe74:552/64 scope global mngtmpaddr dynamic valid_lft 2591857sec preferred_lft 604657sec inet6 fe80::290:faff:fe74:552/64 scope link valid_lft forever preferred_lft forever 16: vlan10: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000 link/ether ce:fa:f3:36:e9:bb brd
ff:ff:ff:ff:ff:ff inet 10.0.0.18/24 brd 10.0.0.255 scope global vlan10 valid_lft forever preferred_lft forever inet6 fe80::ccfa:f3ff:fe36:e9bb/64 scope link valid_lft forever preferred_lft forever 17: vlan20: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000 link/ether e2:59:d0:45:4b:2e brd ff:ff:ff:ff:ff:ff inet 172.17.0.20/24 brd 172.17.0.255 scope global vlan20 valid_lft forever preferred_lft forever inet6 fe80::e059:d0ff:fe45:4b2e/64 scope link valid_lft forever preferred_lft forever 18: vlan30: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000 link/ether 32:b7:a2:9c:f9:c7 brd ff:ff:ff:ff:ff:ff inet 172.18.0.13/24 brd 172.18.0.255 scope global vlan30 valid_lft forever preferred_lft forever inet6 fe80::30b7:a2ff:fe9c:f9c7/64 scope link valid_lft forever preferred_lft forever 19: vlan40: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000 link/ether 8e:e0:a7:a6:ba:f5 brd ff:ff:ff:ff:ff:ff inet 172.19.0.10/24 brd 172.19.0.255 scope global vlan40 valid_lft forever preferred_lft forever inet6 fe80::8ce0:a7ff:fea6:baf5/64 scope link valid_lft forever preferred_lft forever 20: vlan50: mtu 1500 qdisc noqueue state UNKNOWN qlen 1000 link/ether 4e:d6:7e:d8:45:da brd ff:ff:ff:ff:ff:ff inet 172.16.0.14/24 brd 172.16.0.255 scope global vlan50 valid_lft forever preferred_lft forever inet6 fe80::4cd6:7eff:fed8:45da/64 scope link valid_lft forever preferred_lft foreve -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbowen at redhat.com Mon Oct 9 18:11:16 2017 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 9 Oct 2017 14:11:16 -0400 Subject: [rdo-list] Unanswered RDO questions Message-ID: It's been several weeks since I've sent this out. These are the unanswered questions tagged "RDO" on ask.openstack.org. Please look through and see if there's any that you are able to answer. Thanks! --Rich 42 unanswered questions: Openstack installation with binary distribution https://ask.openstack.org/en/question/110315/openstack-installation-with-binary-distribution/ Tags: openstack, openstack-package, hadoop, packagemanager RDO packstack very slow write on cinder volumes https://ask.openstack.org/en/question/109675/rdo-packstack-very-slow-write-on-cinder-volumes/ Tags: rdo, volume, slow, lvm, iscsi Openstack Packstack LVM Thin Provisioning https://ask.openstack.org/en/question/109600/openstack-packstack-lvm-thin-provisioning/ Tags: packstack, openstack, lvm, thin, overprovisioning controller node unexpectedly reboots https://ask.openstack.org/en/question/109468/controller-node-unexpectedly-reboots/ Tags: controller, reboot, newton, rdo, packstack Keystone: domain_specific_drivers_enabled not working with LDAP https://ask.openstack.org/en/question/108918/keystone-domain_specific_drivers_enabled-not-working-with-ldap/ Tags: keystone, ldap limit instance administration via policy.json https://ask.openstack.org/en/question/108746/limit-instance-administration-via-policyjson/ Tags: policy.json, rights, policy, limit Ocata - "nova-manage db sync" fails https://ask.openstack.org/en/question/107970/ocata-nova-manage-db-sync-fails/ Tags: ocata, nova, db_sync Am I missing a configuration option for Magnum? 
Error "no such option B in group [DEFAULT]" https://ask.openstack.org/en/question/107701/am-i-missing-a-configuration-option-for-magnum-error-no-such-option-b-in-group-default/ Tags: magnum, magnum_service VM cannot get network access https://ask.openstack.org/en/question/107627/vm-cannot-get-network-access/ Tags: openvswitch, neutron, rdo, packstack-ocata ocata - theme customization with templates https://ask.openstack.org/en/question/107544/ocata-theme-customization-with-templates/ Tags: ocata, horizon, theme, templates Can't login to dashboard https://ask.openstack.org/en/question/107427/cant-login-to-dashboard/ Tags: dashboard-keystone Gnocchi returned unauthorized error when creating resources alarm from Heat https://ask.openstack.org/en/question/106910/gnocchi-returned-unauthorized-error-when-creating-resources-alarm-from-heat/ Tags: heat, aodh, gnocchi, alarm, unauthorized how to resolved: "Error: Systemd start for openstack-nova-scheduler failed!" https://ask.openstack.org/en/question/106887/how-to-resolved-error-systemd-start-for-openstack-nova-scheduler-failed/ Tags: nova-scheduler, rdo Build of instance aborted: Failed to allocate the network(s), not rescheduling. https://ask.openstack.org/en/question/106853/build-of-instance-aborted-failed-to-allocate-the-networks-not-rescheduling/ Tags: ocata, neutron, networking Instance can not be launched from the image https://ask.openstack.org/en/question/106626/instance-can-not-be-launched-from-the-image/ Tags: create_instance Error during packstack installation https://ask.openstack.org/en/question/106351/error-during-packstack-installation/ Tags: rdo, install, packstack Ocata, openstack server list Unexpected API Error https://ask.openstack.org/en/question/106063/ocata-openstack-server-list-unexpected-api-error/ Tags: ocata-nova, create_instance RHEL 7.2 openstack octava . Error while generating authentication token issue https://ask.openstack.org/en/question/105531/rhel-72-openstack-octava-error-while-generating-authentication-token-issue/ Tags: openstack, identityv3 Unable to configure bgpvpn service plugin on CentOS7 https://ask.openstack.org/en/question/104860/unable-to-configure-bgpvpn-service-plugin-on-centos7/ Tags: packstack, rdo, neutron, networking Upgrade Mitaka to Newton using Packstack got Python ascii error https://ask.openstack.org/en/question/103615/upgrade-mitaka-to-newton-using-packstack-got-python-ascii-error/ Tags: keystone, packstack, newton, python, install Attaching volume to instance https://ask.openstack.org/en/question/102605/attaching-volume-to-instance/ Tags: nova, glusterfs, rdo, centos, storage Create/add additional CIDR for public IP pool https://ask.openstack.org/en/question/102428/createadd-additional-cidr-for-public-ip-pool/ Tags: fuel-9.0, fuel, mitaka, networking domain version of >dashboards< keystone_policy.json https://ask.openstack.org/en/question/102249/domain-version-of-dashboards-keystone_policyjson/ Tags: mitaka, identityv3, domains, policy dashboard authentication problem https://ask.openstack.org/en/question/101905/dashboard-authentication-problem/ Tags: dashboard, keystone ERROR nova.virt.libvirt.driver with Glusterfs in /var/lib/nova/instances https://ask.openstack.org/en/question/101812/error-novavirtlibvirtdriver-with-glusterfs-in-varlibnovainstances/ Tags: glusterfs, nova, storage Cant get volume_clear option in cinder.conf to work. 
https://ask.openstack.org/en/question/101230/cant-get-volume_clear-option-in-cinderconf-to-work/ Tags: cinder, newton, packstack, rdo, storage Rebooting Network node after installation https://ask.openstack.org/en/question/101001/rebooting-network-node-after-installation/ Tags: br-ex, neutron, bridge, create-network, networking Ceilometer - No meters / stats ? https://ask.openstack.org/en/question/100767/ceilometer-no-meters-stats/ Tags: ceilometer, stats, meters, rdo, metrics How to configure swift s3 api https://ask.openstack.org/en/question/100414/how-to-configure-swift-s3-api/ Tags: openstack-swift, proxy-swift, s3, api, mitaka cloudadmin user can't manage all domain from Horizon https://ask.openstack.org/en/question/100317/cloudadmin-user-cant-manage-all-domain-from-horizon/ Tags: horizon, mitaka, cloudadmin, domain -- Rich Bowen: Community Architect rbowen at redhat.com @rbowen // @RDOCommunity // @CentOSProject 1 859 351 9166 From chkumar246 at gmail.com Tue Oct 10 01:30:05 2017 From: chkumar246 at gmail.com (chkumar246 at gmail.com) Date: Tue, 10 Oct 2017 01:30:05 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO Office Hours Message-ID: <20171010013005.8050F6004C57@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO Office Hours on 2017-10-10 from 13:30:00 to 15:30:00 UTC The meeting will be about: The meeting will be about RDO Office Hour. Aim: To keep up with increasing participation, we'll host office hours to add more easy fixes and provide mentoring to newcomers. [Agenda at RDO Office Hour easyfixes](https://review.rdoproject.org/etherpad/p/rdo-office-hour-easyfixes) Source: https://apps.fedoraproject.org/calendar/meeting/6374/ From chkumar246 at gmail.com Tue Oct 10 11:21:08 2017 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 10 Oct 2017 16:51:08 +0530 Subject: [rdo-list] [Fedocal] Reminder meeting : RDO Office Hours In-Reply-To: <20171010013005.8050F6004C57@fedocal02.phx2.fedoraproject.org> References: <20171010013005.8050F6004C57@fedocal02.phx2.fedoraproject.org> Message-ID: Hello All, On Tue, Oct 10, 2017 at 7:00 AM, wrote: > Dear all, > > You are kindly invited to the meeting: > RDO Office Hours on 2017-10-10 from 13:30:00 to 15:30:00 UTC > > > The meeting will be about: > > > The meeting will be about RDO Office Hour. > > Aim: To keep up with increasing participation, we'll host office hours to add more easy fixes and provide mentoring to newcomers. > > > [Agenda at RDO Office Hour easyfixes](https://review.rdoproject.org/etherpad/p/rdo-office-hour-easyfixes) > > Today's RDO office hour is canceled. Sorry for the late notice. Thanks, Chandan Kumar From rbowen at redhat.com Tue Oct 10 13:03:33 2017 From: rbowen at redhat.com (Rich Bowen) Date: Tue, 10 Oct 2017 09:03:33 -0400 Subject: [rdo-list] RDO Social: Tuesday @ OpenStack Summit Message-ID: <8de9c8da-5787-8d12-7de1-0b29ae8335d7@redhat.com> Join us for light refreshments at the end of the day Tuesday, at OpenStack Summit. We'll be gathering at the Pumphouse, on the Sydney harbor, and just a few minutes walk from the conference venue. We'll start around 6:30pm (right after the last sessions of the day end), and going until my budget runs out. Please register at https://www.eventbrite.com/e/rdo-social-at-the-pumphouse-tickets-38766394329 - the event will be free, but we need to know how many to expect, to tell the venue. (For the moment, I've capped registration at 50, until I hear back from the venue for certain how many we can accommodate.) 
Come celebrate another great release, and all the hard work that went into it. -- Rich Bowen: Community Architect rbowen at redhat.com @rbowen // @RDOCommunity // @CentOSProject 1 859 351 9166

From jlabarre at redhat.com Tue Oct 10 19:59:06 2017 From: jlabarre at redhat.com (James LaBarre) Date: Tue, 10 Oct 2017 15:59:06 -0400 Subject: [rdo-list] Horizon inside TripleO Quickstart install Message-ID: <82dd0957-3c24-f51d-6eb0-07867f0bf580@redhat.com>

Trying to determine if a Horizon install is working in a TripleO Quickstart install. The service on the controller node has been configured as per https://docs.openstack.org/ocata/install-guide-rdo/horizon-install.html, the httpd and memcached services show as running with systemctl status (although httpd does complain about "overcloud-controller-0 python[122833]: ERROR:scss.ast:Function not found: function-exists:1"). I figure it would require at least one SSH tunnel to bring up the UI in a local browser, but just to see if it even seems to be active I tried connecting to the service with links on the controller itself, and it says "unable to retrieve http://localhost:5000/: connection refused". Still don't know if it's just that I can't validate it that way, or if it's still not running.

From dms at redhat.com Wed Oct 11 02:42:17 2017 From: dms at redhat.com (David Moreau Simard) Date: Tue, 10 Oct 2017 22:42:17 -0400 Subject: [rdo-list] Problems with the private RDO container registry Message-ID:

Hi,

TL;DR: All images have been mistakenly deleted by a script [0], sorry about that. Images and tags will be repopulated on the next periodic job.

As you might already know, the private RDO container registry we use for CI purposes is an "OpenShift Standalone Registry" [1]. This implementation replaced the (now) deprecated Atomic Registry [2][3].

In a nutshell, it is an OpenShift deployment without all the bells and whistles of OpenShift: apps. It only contains the internal OpenShift registry as well as the registry console web interface, and this registry is exposed for consumption.

OpenShift Standalone Registry was a bit of an uncharted territory, not only for us but I feel for upstream as well. This has been a learning experience but we have contributed several patches, and upstream has been very receptive to our feedback, which resulted in more patches, making the use case better supported in general.

For the sake of keeping this short, the latest issue we had been looking at was the pruning of older images in order to keep the disk usage (and RAM[4]) under control. The good news is that in OpenShift trunk, 3.7, they managed to land part of the patches [5][6] required to make the whole process easier to manage.

However, the bad news is that we're currently running OpenShift 3.5, the latest version being 3.6. Our last attempt at pruning images deleted legitimate image blobs, which resulted in an inconsistent state. I've forcefully deleted all the images completely in order to start from a clean slate.

So, where does that leave us? This is a bit frustrating but not in vain, we've made progress.

In the short term, we'll increase the disk space allocation for the registry in order to allow for more retention. I also want to test a clean installation of OpenShift 3.7 (ahead of release) with our playbooks [7] in order to confirm that our ongoing issues have been resolved. After confirming the issues have been resolved, we'll move forward to use 3.7.
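As a point of reference, the pruning workflow discussed above is driven through the OpenShift client; a minimal sketch, assuming cluster-admin credentials and the stock `oc adm` pruner (the retention values here are illustrative, not our actual settings):

# dry run by default: reports which image revisions and blobs would be removed
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
# the same command with --confirm actually deletes them
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm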
For what it's worth, this work might end up paying off in OpenStack upstream infrastructure as well. At the last OpenStack PTG in Denver, we agreed that an infrastructure-managed image registry would be necessary -- not just for TripleO but for other projects such as Kolla. Between docker-registry/docker-distribution (which leave much to be desired), quay.io (which is not free and open source) and OpenShift standalone registry, it's entirely possible that we end up using OpenShift upstream.

Thanks, and sorry about that.

[0]: https://bugzilla.redhat.com/show_bug.cgi?id=1408676
[1]: https://docs.openshift.com/container-platform/latest/install_config/install/stand_alone_registry.html
[2]: http://www.projectatomic.io/registry/
[3]: https://www.projectatomic.io/blog/2017/05/oo-standalone-registry/
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1489501
[5]: https://github.com/openshift/origin/commit/7783364a6f1fd34cf4833c0be506b8ee90d62691
[6]: https://github.com/openshift/openshift-docs/commit/be0ee4f8a8b7f66fccf77ebbc34c26ba223d794c
[7]: https://github.com/rdo-infra/rdo-container-registry

David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter]

From whayutin at redhat.com Wed Oct 11 03:52:21 2017 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 10 Oct 2017 23:52:21 -0400 Subject: Re: [rdo-list] Problems with the private RDO container registry In-Reply-To: References: Message-ID:

On Tue, Oct 10, 2017 at 10:42 PM, David Moreau Simard wrote:
> Hi,
>
> TL;DR: All images have been mistakenly deleted by a script [0], sorry
> about that. Images and tags will be repopulated on the next periodic
> job.
>
> As you might already know, the private RDO container registry we use
> for CI purposes is an "OpenShift Standalone Registry" [1].
> This implementation replaced the (now) deprecated Atomic Registry [2][3].
>
> In a nutshell, it is an OpenShift deployment without all the bells and
> whistles of OpenShift: apps.
> It only contains the internal OpenShift registry as well as the
> registry console web interface and this registry is exposed for
> consumption.
>
> OpenShift Standalone Registry was a bit of an uncharted territory, not
> only for us but I feel for upstream as well.
> This has been a learning experience but we have contributed several
> patches and upstream has been very receptive to our feedback which
> resulted in more patches, making the use case better supported in
> general.
>
> For the sake of keeping this short, the latest issue we had been
> looking at was the pruning of older images in order to keep the disk
> usage (and RAM[4]) under control.
> The good news is that in OpenShift trunk, 3.7, they managed to land
> part of the patches [5][6] required to make the whole process easier
> to manage.
>
> However, the bad news is that we're currently running OpenShift 3.5,
> the latest version being 3.6.
> Our last attempt a pruning images deleted legitimate image blobs which
> resulted in an inconsistent state.
> I've forcefully deleted all the images completely in order to start
> from a clean slate.
>
> So, where does that leave us ?
> This is a bit frustrating but not in vain, we've made progress.
>
> In the short term, we'll increase the disk space allocation for the
> registry in order to allow for more retention.
> I also want to test a clean installation of OpenShift 3.7 (ahead of
> release) with our playbooks [7] in order to confirm that our ongoing
> issues have been resolved.
> After confirming the issues have been resolved, we'll move forward to use
> 3.7.
> > For what it's worth, this work might end up paying off in OpenStack > upstream infrastructure as well. > At the last OpenStack PTG in Denver, we agreed that a > infrastructure-managed image registry would be necessary -- not just > for TripleO but for other projects such as Kolla. > Between docker-registry/docker-distribution (which leave much to be > desired), quay.io (which is not free and open source) and OpenShift > standalone registry, it's entirely possible that we end up using > OpenShift upstream. > > Thanks, and sorry about that. > > [0]: https://bugzilla.redhat.com/show_bug.cgi?id=1408676 > [1]: https://docs.openshift.com/container-platform/latest/ > install_config/install/stand_alone_registry.html > [2]: http://www.projectatomic.io/registry/ > [3]: https://www.projectatomic.io/blog/2017/05/oo-standalone-registry/ > [4]: https://bugzilla.redhat.com/show_bug.cgi?id=1489501 > [5]: https://github.com/openshift/origin/commit/ > 7783364a6f1fd34cf4833c0be506b8ee90d62691 > [6]: https://github.com/openshift/openshift-docs/commit/ > be0ee4f8a8b7f66fccf77ebbc34c26ba223d794c > [7]: https://github.com/rdo-infra/rdo-container-registry > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > dmsimard = [irc, github, twitter] > Thanks for going above the call the duty there David. It does appear that we're blazing a path for the upstream. Well done. -------------- next part -------------- An HTML attachment was scrubbed... URL: From javier.pena at redhat.com Wed Oct 11 15:44:21 2017 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 11 Oct 2017 11:44:21 -0400 (EDT) Subject: [rdo-list] [Meeting] RDO meeting (2017-10-11) minutes In-Reply-To: <1671948045.14517903.1507736655788.JavaMail.zimbra@redhat.com> Message-ID: <1286086378.14517913.1507736661377.JavaMail.zimbra@redhat.com> ============================== #rdo: RDO meeting - 2017-10-11 ============================== Meeting started by jpena at 15:00:25 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2017_10_11/2017/rdo_meeting___2017_10_11.2017-10-11-15.00.log.html . Meeting summary --------------- * roll call (jpena, 15:00:35) * Infrastructure topics (jpena, 15:03:03) * Published review.rdo config-core policy (jpena, 15:09:48) * Fedora SIG status (jpena, 15:12:17) * Fedora OpenStack SIG is live (number80, 15:13:07) * LINK: https://pagure.io/fedora-infrastructure/issue/6435 (number80, 15:13:10) * mrunge updated clients in F27 (number80, 15:13:33) * Arrfab has updated CentOS Cloud to RDO pike (number80, 15:17:00) * LINK: https://arrfab.net/posts/2017/Oct/11/using-ansible-openstack-modules-on-centos-7/ (number80, 15:17:06) * Announcements (jpena, 15:23:15) * RDO booth in Sydney needs you! (number80, 15:24:35) * LINK: https://etherpad.openstack.org/p/rdo-sydney-summit-booth (number80, 15:24:45) * RDO Social @ OpenStack Summit (number80, 15:24:54) * LINK: https://www.eventbrite.com/e/rdo-social-at-the-pumphouse-tickets-38766394329 (number80, 15:25:21) * chair for next meeting (jpena, 15:27:09) * open floor (jpena, 15:27:50) Meeting ended at 15:32:29 UTC. 
People present (lines said) --------------------------- * jpena (32) * number80 (25) * rbowen (21) * Duck (18) * dmsimard (15) * openstack (11) * jruzicka (6) * chandankumar (4) * rdogerrit (3) * apevec (3) * ykarel (2) * amoralej (2) * jschlueter (1) Generated by `MeetBot`_ 0.1.4

From chkumar246 at gmail.com Wed Oct 11 16:08:44 2017 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 11 Oct 2017 21:38:44 +0530 Subject: [rdo-list] Config project session on 17th Oct, 2017 in RDO Office Hour Message-ID:

Hello,

We are pleased to announce that adarazs and sshnaidm are going to talk about the config project during the upcoming RDO Office Hour on 17th Oct, 2017 (Tue) at 13:30:00 UTC on the #rdo IRC channel on the Freenode server.

The config repository[1.] contains the configuration used to manage https://review.rdoproject.org . By using the same config, we add/manage RDO package repositories as well as roll out different CI jobs against TripleO and RDO. It also manages the configuration for Software Factory, which powers https://review.rdoproject.org.

Session Agenda: * Introduction to Config Project * What is present in different directories? * Each of the config cores will talk about different directories.

Feel free to join the session.

Links: [1.] https://review.rdoproject.org/r/gitweb?p=config.git;a=summary [2.] Mirrored Repo: https://github.com/rdo-infra/review.rdoproject.org-config

Thanks, Chandan Kumar

From rbowen at redhat.com Wed Oct 11 16:31:49 2017 From: rbowen at redhat.com (Rich Bowen) Date: Wed, 11 Oct 2017 12:31:49 -0400 Subject: [rdo-list] Fwd: [openstack-community] Speed Mentoring in Sydney In-Reply-To: References: Message-ID: <190fb05d-5237-8a16-3cc8-67f9c83982dc@redhat.com>

FYI:

-------- Forwarded Message -------- Subject: [openstack-community] Speed Mentoring in Sydney Date: Thu, 12 Oct 2017 00:33:48 +1100 From: Sonia Ramza To: community at lists.openstack.org

Hi there,

There's a fantastic opportunity in Sydney with Speed Mentoring happening again at the Summit. There are two potential ways you can participate:

* Share your knowledge and expertise as a /Mentor/, sign up form here: https://openstackfoundation.formstack.com/forms/syd_speed_mentoring_mentor
* Learn from the best and brightest of our community members, joining as a /Mentee/: https://openstackfoundation.formstack.com/forms/syd_speed_mentoring_mentee

Whichever way you join us, it's a great afternoon over lunch and a lovely way to network and meet your fellow Stackers. Sign up today!

- Sonia Community Management, OpenStack Foundation -------------- next part -------------- _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community

From arxcruz at redhat.com Wed Oct 11 21:54:22 2017 From: arxcruz at redhat.com (Arx Cruz) Date: Wed, 11 Oct 2017 23:54:22 +0200 Subject: [rdo-list] TripleO CI end of sprint status Message-ID:

Hello,

On October 10 we came to our first end of sprint using our new team structure [1], and here are the highlights:

TripleO CI Infra meeting notes:
- Zuul v3 related patch:
- The new Zuul v3 doesn't have the cirros image cached, so we have a patch to change the tempest image back to the default value, that is, downloading the image from the cirros website.
- https://review.openstack.org/510839
- Zuul migration
- There will be an outage in order to fix some issues found during the Zuul migration to v3
- http://lists.openstack.org/pipermail/openstack-dev/2017-October/123337.html
- Job for migration
- We are planning to start moving some jobs from the rh1 cloud to the RDO cloud.
- RDO Software Factory outage
- There was an outage on RDO Cloud on October 9; some jobs were stalled for a long time, but now everything is working.

Sprint Review:

The sprint epic was utilizing the DLRN API across TripleO and RDO [2] to report job status and promotions. We set up several tasks across 20 cards, and I am glad to report that we were able to complete 19 of them! Some of these cards generated tech debt, and after a review we have 11 cards in the tech debt list, plus 3 new bugs opened and XYZ bugs closed by the Ruck and Rover. One can see the results of the sprint via https://tinyurl.com/yblqs5z2

Below is the list of new bugs related to the work completed in the sprint:
- https://bugs.launchpad.net/tripleo/+bug/1722552
- https://bugs.launchpad.net/tripleo/+bug/1722554
- https://bugs.launchpad.net/tripleo/+bug/1722558

And here is the list of what was done by the Ruck and Rover:
- https://bugs.launchpad.net/tripleo/+bug/1722640
- https://bugs.launchpad.net/tripleo/+bug/1722621
- https://bugs.launchpad.net/tripleo/+bug/1722596
- https://bugs.launchpad.net/tripleo/+bug/1721790
- https://bugs.launchpad.net/tripleo/+bug/1721366
- https://bugs.launchpad.net/tripleo/+bug/1721134
- https://bugs.launchpad.net/tripleo/+bug/1720556
- https://bugs.launchpad.net/tripleo/+bug/1719902
- https://bugs.launchpad.net/tripleo/+bug/1719421

[1] https://review.openstack.org/#/c/509280/
[2] https://trello.com/c/5FnfGByl

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From arxcruz at redhat.com Thu Oct 12 14:02:54 2017 From: arxcruz at redhat.com (Arx Cruz) Date: Thu, 12 Oct 2017 16:02:54 +0200 Subject: [rdo-list] RDO-Infra Sprint meeting - Oct 11 Message-ID:

Hello,

Here are the highlights from the TripleO CI Squad meeting from October 11:

- Roles
- The Ruck and the Rover will be responsible for any CI problems, so if you have anything related to CI, please contact them. The rest of the team will work on the sprint.
- Ruck - Wes Hayutin, irc: weshay|ruck
- Rover - Gabrielle Cerami, irc: panda|rover
- Team - Arx Cruz - Ronelle Landy - Attila Darazs - Sagi Shnaidman - John Trowbridge
- For this sprint 10/11/2017 - 10/25/2017
- After reviewing the proposed topics from the UA, the team voted to work on OVB migration to RDO cloud and related work.
- The epic task with more information can be found here: https://trello.com/c/wyUOPIhP/377-ovb-migration-to-rdo-cloud-and-related-work
- Tasks can be found in both the trello card above and in the TripleO CI Squad trello board, using the filter by label "Sprint 2 (10/11/2017 - 10/25/2017)" or by clicking this link: https://tinyurl.com/yb2mkpwv

If you have any questions or suggestions, please let us know. Your feedback is very important to us!

Kind regards, Arx Cruz

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From rbowen at redhat.com Thu Oct 12 18:08:38 2017 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 12 Oct 2017 18:08:38 +0000 Subject: [rdo-list] Upcoming mailing list changes Message-ID: A few months ago we discussed splitting the mailing list into two - a users@ and dev@ list - and at the same time moving the list from @redhat.com to @ rdoproject.org After some delays and technical hurdles, this should be happening in the coming few weeks. Initially, you will be on the subscriber list for both of these two new lists, and it will be up to you to determine whether you stay on both, or just one or the other. You can read more details in this (not yet merged) pull request - https://github.com/redhat-openstack/website/pull/1088/commits/ade9f345bec489bfd74e05ff8d252ea7d0ac4083 - about what things will look like once the task is completed. And you can track the status of the issue in this ticket: https://bugzilla.redhat.com/show_bug.cgi?id=1487324 Thanks for your patience. --Rich -- -- Rich Bowen - rbowen at redhat.com @rbowen // @rdocommunity // @CentOSProject 859 351 9166 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Mon Oct 16 08:31:15 2017 From: chkumar246 at gmail.com (Chandan kumar) Date: Mon, 16 Oct 2017 14:01:15 +0530 Subject: [rdo-list] Feedback/Notes from OpenStack India mini conference at OpenSource India Day, 2017 Bangalore Message-ID: Hello, Janki and I presented two talks at OpenStack India mini-conference at OpenSource India Day, 2017 Bangalore on 14th Oct 2017 [https://opensourceindia.in/osidays/open-source-india-2017/]. The overall conference was good, the audience was interesting. I spoke about RDO and Janki spoke about OpenDaylight's role in OpenStack. Link to slides is below. * Delivering a bleeding edge community-led open stack distribution- RDO : https://www.slideshare.net/ChandanKumar612/delivering-a-bleeding-edge-community-led-open-stack-distribution-rdo * OpenDaylight - Bringing SDN to OpenStack - https://docs.google.com/presentation/d/1mPfvW0dHInMp3SQde6YlJWAlFnn_YhjU17_E0Ezmlak/edit?usp=sharing The audience was interactive. We were hit by lots of questions/suggestions around TripleO and OpenStack. Listing few here: * TripleO Docs are not clear for deploying OpenStack. * It is too much focus from the developer perspective and not from the end-user perspective. For example, the deployment architecture could be explained using few scenarios. * I deployed the cloud, how can I monitor it? * How to do HA deployment using TripleO (Docs from tripleo.org does not show how to do that) and a note about deployment types. * How to configure and use TripleO-Ui (Having screenshots would be better)? * We have lots of deployments methods, people are confused what to use when? RHOS Director or tripleo-quickstart or Manual method? * How to use tripleo-quickstart effectively by using config files? * what to do if somehow undercloud gets destroyed or not accessible? Is redeploy the only option or is there a way to recover it? * People are still confused about the relationship between RDO and RHOS products, what to use when? Should I follow Red Hat customer portal docs or Upstream Docs? * People are still not clear about RDO, its goal and its role in OpenStack. * Do we have a list of projects from which TripleO is formed? and better tracking of the component version we release under TripleO. 
* How users can contribute to TripleO and OpenStack through feedback (as many of them even do not know about IRC) On a positive note, we saw many people interested and trying out TripleO. Thanks, Chandan Kumar Janki Chhatbar From apevec at redhat.com Mon Oct 16 10:42:40 2017 From: apevec at redhat.com (Alan Pevec) Date: Mon, 16 Oct 2017 12:42:40 +0200 Subject: [rdo-list] Feedback/Notes from OpenStack India mini conference at OpenSource India Day, 2017 Bangalore In-Reply-To: References: Message-ID: On Mon, Oct 16, 2017 at 10:31 AM, Chandan kumar wrote: ... > * People are still not clear about RDO, its goal and its role in OpenStack. Even after you presentation? Alan From chkumar246 at gmail.com Mon Oct 16 11:00:55 2017 From: chkumar246 at gmail.com (Chandan kumar) Date: Mon, 16 Oct 2017 16:30:55 +0530 Subject: [rdo-list] Feedback/Notes from OpenStack India mini conference at OpenSource India Day, 2017 Bangalore In-Reply-To: References: Message-ID: Hello Alan, On Mon, Oct 16, 2017 at 4:12 PM, Alan Pevec wrote: > On Mon, Oct 16, 2017 at 10:31 AM, Chandan kumar wrote: > ... >> * People are still not clear about RDO, its goal and its role in OpenStack. > > > Even after you presentation? > While talking to people during the conference, People were confused what is RDO in OpenStack ecosystem. After the presentation, we tried to clear the confusion after the presentation. I have added this point as a general feedback and notes. Thanks, Chandan Kumar From hguemar at fedoraproject.org Mon Oct 16 15:00:04 2017 From: hguemar at fedoraproject.org (hguemar at fedoraproject.org) Date: Mon, 16 Oct 2017 15:00:04 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting Message-ID: <20171016150004.0A97C60A416B@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO meeting on 2017-10-18 from 15:00:00 to 16:00:00 UTC At rdo at irc.freenode.net The meeting will be about: RDO IRC meeting [Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting) Every Wednesday on #rdo on Freenode IRC Source: https://apps.fedoraproject.org/calendar/meeting/2017/ From rbowen at redhat.com Mon Oct 16 20:12:39 2017 From: rbowen at redhat.com (Rich Bowen) Date: Mon, 16 Oct 2017 16:12:39 -0400 Subject: [rdo-list] Upcoming meetups Message-ID: <05c14539-2e7c-5914-d3f8-471554a42908@redhat.com> The following are the meetups I'm aware of in the next two weeks where OpenStack and/or RDO enthusiasts are likely to be present. If you know of others, please let me know, and/or add them to http://rdoproject.org/events If there's a meetup in your area, please consider attending. If you attend, please consider taking a few photos, and possibly even writing up a brief summary of what was covered. --Rich * Tuesday October 17 in Istanbul, TR: Openstack Days 2017 - https://www.meetup.com/Turkey-OpenStack-Meetup/events/243535231/ * Tuesday October 17 in Guadalajara, MX: Administraci?n de Redes en ambiente de nube con OpenStack - https://www.meetup.com/OpenStack-GDL/events/243661445/ * Wednesday October 18 in Copenhagen, DK: OpenStack Days Nordic - Copenhagen 2017 - https://www.meetup.com/openstackdk/events/241302507/ * Wednesday October 18 in Oslo, NO: Ericsson as Proud Sponsor of OpenStack Days Nordic! - https://www.meetup.com/EricssonNorway/events/243550738/ * Wednesday October 18 in M?xico City, MX: Learn about the latest network features in OpenStack - https://www.meetup.com/OpenstackMexicoCity/events/244231080/ * Thursday October 19 in Kazan, RU: ?????? 
OpenStack Meetup ? ?????? - https://www.meetup.com/OpenStack-Russia-Kazan/events/243518457/ * Thursday October 19 in K?ln, DE: OpenTechThoughts: TechExperience & TechTalk - https://www.meetup.com/OpenStack-Cologne/events/243821271/ * Thursday October 19 in Portland, OR, US: Cloud Instance Bootstrapping - https://www.meetup.com/openstack-pdx/events/244035380/ * Thursday October 19 in Chesterfield, MO, US: SUSE: Cloud strategy for VMs and Containers - https://www.meetup.com/OpenStack-STL/events/240759508/ * Monday October 23 in Houston, TX, US: All Day DevOps - https://www.meetup.com/openstackhoustonmeetup/events/241353325/ * Tuesday October 24 in Austin, TX, US: All Day DevOps 2017 - https://www.meetup.com/OpenStack-Austin/events/243887992/ * Tuesday October 24 in Wellington, NZ: Function as a Service for OpenStack (Wellington) - https://www.meetup.com/New-Zealand-OpenStack-User-Group/events/244239853/ * Wednesday October 25 in M?nchen, DE: OpenStack Grundlagen Workshop (M?nchen) - https://www.meetup.com/OpenStack-Munich/events/244039769/ * Wednesday October 25 in Fort Collins, CO, US: Talk About OpenStack - https://www.meetup.com/OpenStack-Colorado/events/244148339/ * Thursday October 26 in Helsinki, FI: OpenStackOperators Finland Video Call - https://www.meetup.com/OpenStack-Finland-User-Group/events/243550986/ * Friday October 27 in Sydney, AU: SLUG Oct meeting: Openstack virtualisation cluster - https://www.meetup.com/Sydney-Linux-User-Group/events/244109300/ From dms at redhat.com Mon Oct 16 20:41:21 2017 From: dms at redhat.com (David Moreau Simard) Date: Mon, 16 Oct 2017 16:41:21 -0400 Subject: [rdo-list] Upcoming meetups In-Reply-To: <05c14539-2e7c-5914-d3f8-471554a42908@redhat.com> References: <05c14539-2e7c-5914-d3f8-471554a42908@redhat.com> Message-ID: There is OpenStack Days Canada [1] this week, october 19th ! I'll be there. [1]: https://www.openstackcanada.com/ David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Oct 16, 2017 at 4:12 PM, Rich Bowen wrote: > The following are the meetups I'm aware of in the next two weeks where > OpenStack and/or RDO enthusiasts are likely to be present. If you know > of others, please let me know, and/or add them to > http://rdoproject.org/events > > If there's a meetup in your area, please consider attending. If you > attend, please consider taking a few photos, and possibly even writing > up a brief summary of what was covered. > > --Rich > > * Tuesday October 17 in Istanbul, TR: Openstack Days 2017 - > https://www.meetup.com/Turkey-OpenStack-Meetup/events/243535231/ > > * Tuesday October 17 in Guadalajara, MX: Administraci?n de Redes en ambiente > de nube con OpenStack - > https://www.meetup.com/OpenStack-GDL/events/243661445/ > > * Wednesday October 18 in Copenhagen, DK: OpenStack Days Nordic - Copenhagen > 2017 - https://www.meetup.com/openstackdk/events/241302507/ > > * Wednesday October 18 in Oslo, NO: Ericsson as Proud Sponsor of OpenStack > Days Nordic! - https://www.meetup.com/EricssonNorway/events/243550738/ > > * Wednesday October 18 in M?xico City, MX: Learn about the latest network > features in OpenStack - > https://www.meetup.com/OpenstackMexicoCity/events/244231080/ > > * Thursday October 19 in Kazan, RU: ?????? OpenStack Meetup ? ?????? 
- > https://www.meetup.com/OpenStack-Russia-Kazan/events/243518457/ > > * Thursday October 19 in K?ln, DE: OpenTechThoughts: TechExperience & > TechTalk - https://www.meetup.com/OpenStack-Cologne/events/243821271/ > > * Thursday October 19 in Portland, OR, US: Cloud Instance Bootstrapping - > https://www.meetup.com/openstack-pdx/events/244035380/ > > * Thursday October 19 in Chesterfield, MO, US: SUSE: Cloud strategy for VMs > and Containers - https://www.meetup.com/OpenStack-STL/events/240759508/ > > * Monday October 23 in Houston, TX, US: All Day DevOps - > https://www.meetup.com/openstackhoustonmeetup/events/241353325/ > > * Tuesday October 24 in Austin, TX, US: All Day DevOps 2017 - > https://www.meetup.com/OpenStack-Austin/events/243887992/ > > * Tuesday October 24 in Wellington, NZ: Function as a Service for OpenStack > (Wellington) - > https://www.meetup.com/New-Zealand-OpenStack-User-Group/events/244239853/ > > * Wednesday October 25 in M?nchen, DE: OpenStack Grundlagen Workshop > (M?nchen) - https://www.meetup.com/OpenStack-Munich/events/244039769/ > > * Wednesday October 25 in Fort Collins, CO, US: Talk About OpenStack - > https://www.meetup.com/OpenStack-Colorado/events/244148339/ > > * Thursday October 26 in Helsinki, FI: OpenStackOperators Finland Video Call > - https://www.meetup.com/OpenStack-Finland-User-Group/events/243550986/ > > * Friday October 27 in Sydney, AU: SLUG Oct meeting: Openstack > virtualisation cluster - > https://www.meetup.com/Sydney-Linux-User-Group/events/244109300/ > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com From dms at redhat.com Tue Oct 17 00:43:45 2017 From: dms at redhat.com (David Moreau Simard) Date: Mon, 16 Oct 2017 20:43:45 -0400 Subject: [rdo-list] logs.rdoproject.org pruning Message-ID: Hi, We've hit 1.5TB out of 2TB on logs.rdoproject.org and it's time to look at what kind of retention this gets us. logs.rdoproject.org aggregates logs from ci.centos.org, review.rdoproject.org as well as third party logs. We have about 90 days worth of data right now but we're also increasing the pace at which we're uploading new data as we are migrating new TripleO jobs to review.rdoproject.org. I've set pruning to 60 days for the time being and we will monitor how things are looking in the near future. David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From chkumar246 at gmail.com Tue Oct 17 01:30:05 2017 From: chkumar246 at gmail.com (chkumar246 at gmail.com) Date: Tue, 17 Oct 2017 01:30:05 +0000 (UTC) Subject: [rdo-list] [Fedocal] Reminder meeting : RDO Office Hours Message-ID: <20171017013005.414CF60A416B@fedocal02.phx2.fedoraproject.org> Dear all, You are kindly invited to the meeting: RDO Office Hours on 2017-10-17 from 13:30:00 to 15:30:00 UTC The meeting will be about: The meeting will be about RDO Office Hour. Aim: To keep up with increasing participation, we'll host office hours to add more easy fixes and provide mentoring to newcomers. 
[Agenda at RDO Office Hour easyfixes](https://review.rdoproject.org/etherpad/p/rdo-office-hour-easyfixes) Source: https://apps.fedoraproject.org/calendar/meeting/6374/ From whayutin at redhat.com Tue Oct 17 02:05:20 2017 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 16 Oct 2017 22:05:20 -0400 Subject: [rdo-list] logs.rdoproject.org pruning In-Reply-To: References: Message-ID: Thanks David, I think 30 - 35 days is sufficient for your reference. The upstream requirement is 30 days :) Thanks for taking care of it!! On Mon, Oct 16, 2017 at 8:43 PM, David Moreau Simard wrote: > Hi, > > We've hit 1.5TB out of 2TB on logs.rdoproject.org and it's time to > look at what kind of retention this gets us. > logs.rdoproject.org aggregates logs from ci.centos.org, > review.rdoproject.org as well as third party logs. > > We have about 90 days worth of data right now but we're also > increasing the pace at which we're uploading new data as we are > migrating new TripleO jobs to review.rdoproject.org. > I've set pruning to 60 days for the time being and we will monitor how > things are looking in the near future. > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at redhat.com Tue Oct 17 03:38:58 2017 From: dms at redhat.com (David Moreau Simard) Date: Mon, 16 Oct 2017 23:38:58 -0400 Subject: [rdo-list] logs.rdoproject.org pruning In-Reply-To: References: Message-ID: The idea is to maximize our usage of this 2TB without going over a certain treshold. We'll tweak retention to keep logs as long as possible while keeping sufficient enough buffer in terms of space. For now we're trying 60 days, if we can do it and stay within that treshold, we'll keep 60 days! David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Oct 16, 2017 at 10:05 PM, Wesley Hayutin wrote: > Thanks David, > I think 30 - 35 days is sufficient for your reference. > The upstream requirement is 30 days :) > Thanks for taking care of it!! > > On Mon, Oct 16, 2017 at 8:43 PM, David Moreau Simard wrote: >> >> Hi, >> >> We've hit 1.5TB out of 2TB on logs.rdoproject.org and it's time to >> look at what kind of retention this gets us. >> logs.rdoproject.org aggregates logs from ci.centos.org, >> review.rdoproject.org as well as third party logs. >> >> We have about 90 days worth of data right now but we're also >> increasing the pace at which we're uploading new data as we are >> migrating new TripleO jobs to review.rdoproject.org. >> I've set pruning to 60 days for the time being and we will monitor how >> things are looking in the near future. >> >> David Moreau Simard >> Senior Software Engineer | OpenStack RDO >> >> dmsimard = [irc, github, twitter] > > From hguemar at redhat.com Tue Oct 17 06:28:36 2017 From: hguemar at redhat.com (=?UTF-8?B?SGHDr2tlbCBHdcOpbWFy?=) Date: Tue, 17 Oct 2017 08:28:36 +0200 Subject: [rdo-list] logs.rdoproject.org pruning In-Reply-To: References: Message-ID: <610dd83c-3141-b2da-f2fd-7800e1f8814f@redhat.com> On 17/10/2017 05:38, David Moreau Simard wrote: > The idea is to maximize our usage of this 2TB without going over a > certain treshold. > We'll tweak retention to keep logs as long as possible while keeping > sufficient enough buffer in terms of space. > > For now we're trying 60 days, if we can do it and stay within that > treshold, we'll keep 60 days! > Concerning packages reviews, 30/35 would be ok. 
Anything older would mean to recheck jobs anyway. Ack for 60. H. > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > > On Mon, Oct 16, 2017 at 10:05 PM, Wesley Hayutin wrote: >> Thanks David, >> I think 30 - 35 days is sufficient for your reference. >> The upstream requirement is 30 days :) >> Thanks for taking care of it!! >> >> On Mon, Oct 16, 2017 at 8:43 PM, David Moreau Simard wrote: >>> >>> Hi, >>> >>> We've hit 1.5TB out of 2TB on logs.rdoproject.org and it's time to >>> look at what kind of retention this gets us. >>> logs.rdoproject.org aggregates logs from ci.centos.org, >>> review.rdoproject.org as well as third party logs. >>> >>> We have about 90 days worth of data right now but we're also >>> increasing the pace at which we're uploading new data as we are >>> migrating new TripleO jobs to review.rdoproject.org. >>> I've set pruning to 60 days for the time being and we will monitor how >>> things are looking in the near future. >>> >>> David Moreau Simard >>> Senior Software Engineer | OpenStack RDO >>> >>> dmsimard = [irc, github, twitter] >> >> > > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > From whayutin at redhat.com Tue Oct 17 13:11:54 2017 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 17 Oct 2017 09:11:54 -0400 Subject: [rdo-list] logs.rdoproject.org pruning In-Reply-To: <610dd83c-3141-b2da-f2fd-7800e1f8814f@redhat.com> References: <610dd83c-3141-b2da-f2fd-7800e1f8814f@redhat.com> Message-ID: Thanks for taking care of it David! On Tue, Oct 17, 2017 at 2:28 AM, Ha?kel Gu?mar wrote: > On 17/10/2017 05:38, David Moreau Simard wrote: > >> The idea is to maximize our usage of this 2TB without going over a >> certain treshold. >> We'll tweak retention to keep logs as long as possible while keeping >> sufficient enough buffer in terms of space. >> >> For now we're trying 60 days, if we can do it and stay within that >> treshold, we'll keep 60 days! >> >> > Concerning packages reviews, 30/35 would be ok. Anything older would mean > to recheck jobs anyway. > Ack for 60. > > H. > > David Moreau Simard >> Senior Software Engineer | OpenStack RDO >> >> dmsimard = [irc, github, twitter] >> >> >> On Mon, Oct 16, 2017 at 10:05 PM, Wesley Hayutin >> wrote: >> >>> Thanks David, >>> I think 30 - 35 days is sufficient for your reference. >>> The upstream requirement is 30 days :) >>> Thanks for taking care of it!! >>> >>> On Mon, Oct 16, 2017 at 8:43 PM, David Moreau Simard >>> wrote: >>> >>>> >>>> Hi, >>>> >>>> We've hit 1.5TB out of 2TB on logs.rdoproject.org and it's time to >>>> look at what kind of retention this gets us. >>>> logs.rdoproject.org aggregates logs from ci.centos.org, >>>> review.rdoproject.org as well as third party logs. >>>> >>>> We have about 90 days worth of data right now but we're also >>>> increasing the pace at which we're uploading new data as we are >>>> migrating new TripleO jobs to review.rdoproject.org. >>>> I've set pruning to 60 days for the time being and we will monitor how >>>> things are looking in the near future. 
>>>> >>>> David Moreau Simard >>>> Senior Software Engineer | OpenStack RDO >>>> >>>> dmsimard = [irc, github, twitter] >>>> >>> >>> >>> >> _______________________________________________ >> rdo-list mailing list >> rdo-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rdo-list >> >> To unsubscribe: rdo-list-unsubscribe at redhat.com >> >> > _______________________________________________ > rdo-list mailing list > rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list > > To unsubscribe: rdo-list-unsubscribe at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Tue Oct 17 14:41:56 2017 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 17 Oct 2017 20:11:56 +0530 Subject: [rdo-list] Config Project session logs Message-ID: Hello, On 17th Oct, 2017, we have hosted Config project session in RDO office hour. Here is the detailed log: http://eavesdrop.openstack.org/meetings/rdo_office_hour___2017_10_17/2017/rdo_office_hour___2017_10_17.2017-10-17-13.31.log.html#l-17 And here are the notes: https://review.rdoproject.org/etherpad/p/rdo-config-office-hours Feel free to go through the links. If you have any queries, Feel free to reply back. Thanks, Chandan Kumar From javier.pena at redhat.com Wed Oct 18 16:00:53 2017 From: javier.pena at redhat.com (Javier Pena) Date: Wed, 18 Oct 2017 12:00:53 -0400 (EDT) Subject: [rdo-list] [Meeting] RDO meeting (2017-10-18) minutes Message-ID: <1517505188.17426358.1508342453583.JavaMail.zimbra@redhat.com> ============================== #rdo: RDO meeting - 2017-10-18 ============================== Meeting started by jpena at 15:04:18 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/rdo_meeting___2017_10_18/2017/rdo_meeting___2017_10_18.2017-10-18-15.04.log.html . Meeting summary --------------- * roll call (jpena, 15:04:27) * RDO Trunk cleanup for Fedora Rawhide (jpena, 15:06:16) * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1377125 has details on the whole hacking vs flake8 issue (jpena, 15:14:44) * ACTION: jpena to create copr for Fedora deps that can't/won't be included in Rawhide (jpena, 15:25:16) * TripleO 'Zuul v3 ask me anything': http://lists.openstack.org/pipermail/openstack-dev/2017-October/123668.html (jpena, 15:25:57) * LINK: http://lists.openstack.org/pipermail/openstack-dev/2017-October/123668.html (dmsimard, 15:26:48) * Mailing list migration status (jpena, 15:29:51) * mailing lists to be migrated soon, stay tuned for announcement (jpena, 15:42:36) * chair for next meeting (jpena, 15:45:34) * ACTION: amoralej to chair next meeting (jpena, 15:49:51) * open floor (jpena, 15:49:55) Meeting ended at 15:58:46 UTC. 
Action items, by person ----------------------- * amoralej * amoralej to chair next meeting * jpena * jpena to create copr for Fedora deps that can't/won't be included in Rawhide

People present (lines said) --------------------------- * jpena (42) * dmsimard (36) * Duck (28) * rbowen (22) * openstack (8) * hguemar (5) * amoralej (5) * honza (4) * jrist (4) * rdogerrit (3) * EmilienM (3) * openstackgerrit (2) * ykarel|afk (1) * chandankumar (1) * adarazs (1) * ykarel (0) Generated by `MeetBot`_ 0.1.4

From jlabarre at redhat.com Wed Oct 18 18:50:36 2017 From: jlabarre at redhat.com (James LaBarre) Date: Wed, 18 Oct 2017 14:50:36 -0400 Subject: [rdo-list] tunneling to Horizon Message-ID: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com>

I have experimented with various configurations, and I have yet to find a combination that works. I have a TripleO quickstart install (basic setup) and am trying to connect to the Horizon dashboard. My scenario is like this:

Laptop at home connects to HWhost in server lab (can ssh directly through a VPN to the HWhost, can ping)

Undercloud can be seen from HWhost, not from Laptop (have to ssh to undercloud from HWhost, after having SSHed to HWhost from Laptop. Can ping from HWhost, not from Laptop)

Controller and Compute can ping from undercloud, can ssh from undercloud directly, can ssh from HWhost by redirecting through undercloud (?)

So with all this, how does one use a web browser to connect to Horizon? Does the browser have to be running on HWhost or Undercloud, or can it be tunnelled for the Laptop? It would seem I'd have to do multiple tunnels (if that's even allowed).

From jlabarre at redhat.com Wed Oct 18 20:37:11 2017 From: jlabarre at redhat.com (James LaBarre) Date: Wed, 18 Oct 2017 16:37:11 -0400 Subject: Re: [rdo-list] tunneling to Horizon In-Reply-To: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com> References: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com> Message-ID: <698f5447-4ca9-db19-042e-bc10a53faccc@redhat.com>

I *might* (emphasize "might") have reached the Horizon server/desktop, but now Firefox will give the error:

===================================================

An error occurred during a connection to localhost:8080. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG

===================================================

Mozilla's help page suggests this is a problem with the certificate on the server side. Now, since I presume the certificate was generated by quickstart (I didn't generate one myself), I would guess something is wrong in the configuration.

From dmanchad at redhat.com Thu Oct 19 13:16:02 2017 From: dmanchad at redhat.com (David Manchado Cuesta) Date: Thu, 19 Oct 2017 15:16:02 +0200 Subject: [rdo-list] [rdo cloud] Ceph issues seem to be solved Message-ID:

All,

We are glad to let you know that the issues we've been having on the ceph cluster (latency, performance) for the last 6 weeks have been solved since yesterday ~17:00 UTC. We will keep an eye on this issue until Monday before considering it completely solved, but it looks promising.

The root cause was that the cache policy for the OSD disks was not the expected one, so we weren't taking any advantage of the cache on the RAID controller. The change might be due to the fact that the servers were unplugged from the PDU for some time and the BBU might have completely discharged, reverting to default settings (just a theory).
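For anyone wanting to check for the same condition on similar hardware, the controller cache policy and BBU state can be read from the OS; a sketch, assuming an LSI/MegaRAID controller managed with the MegaCli64 tool on adapter 0 (not necessarily the tooling used here):

MegaCli64 -LDGetProp -Cache -LAll -a0    # current vs default cache policy per logical drive
MegaCli64 -AdpBbuCmd -GetBbuStatus -a0   # BBU charge level and state
# if the policy fell back to WriteThrough, write-back can be restored with:
MegaCli64 -LDSetProp WB -LAll -a0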
We have confirmed that the cache policy change is persistent across server reboots, so we should not hit this problem again (crossing fingers).

Thanks to all the colleagues that have given us a hand!

David Manchado Senior Software Engineer - SysOps Team Red Hat

From whayutin at redhat.com Thu Oct 19 18:14:19 2017 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 19 Oct 2017 14:14:19 -0400 Subject: Re: [rdo-list] tunneling to Horizon In-Reply-To: <698f5447-4ca9-db19-042e-bc10a53faccc@redhat.com> References: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com> <698f5447-4ca9-db19-042e-bc10a53faccc@redhat.com> Message-ID:

On Wed, Oct 18, 2017 at 4:37 PM, James LaBarre wrote:
> I *might* (emphasize "might") have reached the Horizon server/desktop,
> but now Firefox will give the error:
>
> ===================================================
>
> An error occurred during a connection to localhost:8080. SSL received a
> record that exceeded the maximum permissible length. Error code:
> SSL_ERROR_RX_RECORD_TOO_LONG
>
> ===================================================
>
> Mozilla's help page suggests this is a problem with the certificate on
> the server side. Now, since I presume the certificate was generated bt
> quickstart (I didn't generate one myself) I would guess something is
> wrong in the configuration
>
> _______________________________________________
> rdo-list mailing list
> rdo-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rdo-list
>
> To unsubscribe: rdo-list-unsubscribe at redhat.com

Hrm, James, that sounds like the issue we have documented at https://docs.openstack.org/tripleo-quickstart/latest/accessing-undercloud.html#access-via-the-tripleo-ui

Please have a look and let us know.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
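For the multi-hop scenario James describes, a single forwarded port chained through both machines can also work; a sketch, assuming OpenSSH 7.3+ on the laptop and placeholder host names taken from his description (the Horizon endpoint and port must be adjusted to the actual deployment):

# from the laptop: hop through HWhost to the undercloud, forwarding a local port to Horizon
ssh -o ProxyJump=user@hwhost -N -L 8080:overcloud-public-vip:443 stack@undercloud
# then browse to https://localhost:8080/dashboard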
From rbowen at redhat.com Fri Oct 20 04:44:07 2017 From: rbowen at redhat.com (Rich Bowen) Date: Fri, 20 Oct 2017 04:44:07 +0000 Subject: [rdo-list] NOTICE: Mailing list changes, Tuesday October 24th Message-ID:

NOTICE: On Tuesday, October 24th, we will have an outage of the rdo-list mailing list, starting at 10:30 JST (01:30 UTC) and potentially lasting until 12:00 JST (03:00 UTC).

During this time, we will be migrating rdo-list at redhat.com to two new mailing lists - dev at lists.rdoproject.org and users at lists.rdoproject.org

You will, initially, be subscribed to both of these lists. You can unsubscribe according to the usual formula, eg sending a blank message to dev-unsubscribe at lists.rdoproject.org

The purposes of these lists are, respectively, development and user discussions. New messages to those lists should be sent to those email addresses. Messages to the old rdo-list email address will receive an auto-response with the above information.

The page at http://rdoproject.org/contribute/mailing-lists/ will also be updated to reflect this information.

At the same time, the rdo-infra at redhat.com mailing list will move to infra at lists.rdoproject.org with existing subscriptions copied over. No action is necessary to remain on that list. Likewise, the rdo-newsletter mailing list will be moved to newsletter at lists.rdoproject.org

Please let us (myself and Marc "Duck" Dequenes) know immediately of any concerns around these changes. Thanks.

--Rich

-- -- Rich Bowen - rbowen at redhat.com @rbowen // @rdocommunity // @CentOSProject 859 351 9166 -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lorenzetto.luca at gmail.com Fri Oct 20 15:50:50 2017 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Fri, 20 Oct 2017 17:50:50 +0200 Subject: Re: [rdo-list] tunneling to Horizon In-Reply-To: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com> References: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com> Message-ID:

On Wed, Oct 18, 2017 at 8:50 PM, James LaBarre wrote:
> I have experimented with various configurations, and I have yet to find
> a combination that works.
[cut]
> So with all this, how does one use a web browser to connect to Horizon?
> Does the browser have to be running on HWHost or Undercloud, or can it
> be tunnelled for the Laptop? It would seem I'd have to do multiple
> tunnels (if that's even allowed).

Since I had a similar issue, to avoid any redirection issues I installed Firefox on the undercloud host and enabled X forwarding. Then I added a DNAT rule mapping port 2222 of the virthost to port 22 of the undercloud. So I can ssh -X directly to the undercloud and run Firefox in a network that has full visibility.

Luca

-- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the biggest library in the world. The problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (1945-present) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net ,
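The redirect Luca describes comes down to a NAT rule on the virthost; a sketch, assuming eth0 is the virthost's lab-facing NIC and 192.168.23.2 is a hypothetical undercloud address on its internal bridge:

# on the virthost: expose the undercloud's sshd on port 2222
# (assumes net.ipv4.ip_forward=1 on the virthost)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.23.2:22
iptables -A FORWARD -p tcp -d 192.168.23.2 --dport 22 -j ACCEPT
# from the laptop afterwards:
ssh -X -p 2222 stack@virthost   # then launch firefox there, displayed locally over X11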
From clecomte at redhat.com Mon Oct 23 08:23:43 2017
From: clecomte at redhat.com (Cedric Lecomte)
Date: Mon, 23 Oct 2017 10:23:43 +0200
Subject: [rdo-list] Problem with ha-router
Message-ID:

Hello all,

I tried to deploy RDO Pike without containers on our internal platform.

The setup is pretty simple:
- 3 Controller in HA
- 5 Ceph
- 4 Compute
- 3 Object-Store

I didn't use any exotic parameters. This is my deployment command:

openstack overcloud deploy --templates \
  -e environement.yaml \
  --ntp-server 0.pool.ntp.org \
  -e storage-env.yaml \
  -e network-env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph.yaml \
  --control-scale 3 --control-flavor control \
  --compute-scale 4 --compute-flavor compute \
  --ceph-storage-scale 5 --ceph-storage-flavor ceph-storage \
  --swift-storage-flavor swift-storage --swift-storage-scale 3 \
  -e scheduler_hints_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml

environement.yaml:

parameter_defaults:
  ControllerCount: 3
  ComputeCount: 4
  CephStorageCount: 5
  OvercloudCephStorageFlavor: ceph-storage
  CephDefaultPoolSize: 3
  ObjectStorageCount: 3

network-env.yaml:

resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/swift-storage.yaml

parameter_defaults:
  InternalApiNetCidr: 172.16.0.0/24
  TenantNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  ManagementNetCidr: 172.20.0.0/24
  ExternalNetCidr: 10.41.11.0/24
  InternalApiAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  ManagementAllocationPools: [{'start': '172.20.0.10', 'end': '172.20.0.200'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '10.41.11.10', 'end': '10.41.11.30'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 10.41.11.254
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.131.253
  # The IP address of the EC2 metadata server. Generally the IP of the Undercloud
  EC2MetadataIp: 192.0.2.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["10.38.5.26"]
  InternalApiNetworkVlanID: 202
  StorageNetworkVlanID: 203
  StorageMgmtNetworkVlanID: 204
  TenantNetworkVlanID: 205
  ManagementNetworkVlanID: 206
  ExternalNetworkVlanID: 198
  NeutronExternalNetworkBridge: "''"
  ControlPlaneSubnetCidr: '24'
  BondInterfaceOvsOptions: "mode=balance-xor"

storage-env.yaml:

parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb': {}
      '/dev/sdc': {}
      '/dev/sdd': {}
      '/dev/sde': {}
      '/dev/sdf': {}
      '/dev/sdg': {}
  SwiftRingBuild: false
  RingBuild: false

scheduler_hints_env.yaml:

parameter_defaults:
  ControllerSchedulerHints:
    'capabilities:node': 'control-%index%'
  NovaComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
  CephStorageSchedulerHints:
    'capabilities:node': 'ceph-storage-%index%'
  ObjectStorageSchedulerHints:
    'capabilities:node': 'swift-storage-%index%'

After a little use, I found that one controller is unable to get an active
ha-router, and I got this output:

neutron l3-agent-list-hosting-router XXX
+--------------------------------------+------------------------------------+----------------+-------+----------+
| id                                   | host                               | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------------+----------------+-------+----------+
| 420a7e31-bae1-4f8c-9438-97839cf190c4 | overcloud-controller-0.localdomain | True           | :-)   | standby  |
| 6a943aa5-6fd1-4b44-8557-f0043b266a2f | overcloud-controller-1.localdomain | True           | :-)   | standby  |
| dd66ef16-7533-434f-bf5b-25e38c51375f | overcloud-controller-2.localdomain | True           | :-)   | standby  |
+--------------------------------------+------------------------------------+----------------+-------+----------+

So each time a router is scheduled on this controller, I can't get an
active router. I tried to compare the configuration, but everything seems
to be good. I redeployed to see if it would help, and the only thing that
changed is the controller where the ha-routers are stuck.

The only message that I got is from OVS:

2017-10-20 08:38:44.930 136145 WARNING neutron.agent.rpc [req-0ad9aec4-f718-498f-9ca7-15b265340174 - - - - -] Device Port(admin_state_up=True,allowed_address_pairs=[],binding=PortBinding,binding_levels=[],created_at=2017-10-20T08:38:38Z,data_plane_status=,description='',device_id='a7e23552-9329-4572-a69d-d7f316fcc5c9',device_owner='network:router_ha_interface',dhcp_options=[],distributed_binding=None,dns=None,fixed_ips=[IPAllocation],id=7b6d81ef-0451-4216-9fe5-52d921052cb7,mac_address=fa:16:3e:13:e9:3c,name='HA port tenant 0ee0af8e94044a42923873939978ed42',network_id=ffe5ffa5-2693-4d35-988e-7290899601e0,project_id='',qos_policy_id=None,revision_number=5,security=PortSecurity(7b6d81ef-0451-4216-9fe5-52d921052cb7),security_group_ids=set([]),status='DOWN',updated_at=2017-10-20T08:38:44Z) is not bound.
2017-10-20 08:38:44.944 136145 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-0ad9aec4-f718-498f-9ca7-15b265340174 - - - - -] Device 7b6d81ef-0451-4216-9fe5-52d921052cb7 not defined on plugin or binding failed

Any idea?

--

LECOMTE Cedric

Senior Software Engineer

Red Hat

clecomte at redhat.com

TRIED. TESTED. TRUSTED.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
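One way to dig further on the affected controller is to check what
keepalived, which implements the VRRP election behind L3 HA, thinks the
state is. A minimal sketch, assuming the stock Neutron L3-HA layout, with
$ROUTER as a placeholder for the router's UUID; run as root on the
controller:

  # The router's namespace and its keepalived instance should both exist.
  ip netns | grep qrouter-$ROUTER
  ps aux | grep [k]eepalived | grep $ROUTER

  # The state keepalived last reported for this router (master/backup).
  cat /var/lib/neutron/ha_confs/$ROUTER/state

  # Watch for VRRP advertisements (IP protocol 112) inside the namespace;
  # if none arrive, the HA port on this node is likely the unbound one
  # from the warning above.
  ip netns exec qrouter-$ROUTER tcpdump -n -i any 'ip proto 112'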
From hguemar at fedoraproject.org Mon Oct 23 15:00:03 2017
From: hguemar at fedoraproject.org (hguemar at fedoraproject.org)
Date: Mon, 23 Oct 2017 15:00:03 +0000 (UTC)
Subject: [rdo-list] [Fedocal] Reminder meeting : RDO meeting
Message-ID: <20171023150003.D01E160A416B@fedocal02.phx2.fedoraproject.org>

Dear all,

You are kindly invited to the meeting:
RDO meeting on 2017-10-25 from 15:00:00 to 16:00:00 UTC
At rdo at irc.freenode.net

The meeting will be about:
RDO IRC meeting

[Agenda at https://etherpad.openstack.org/p/RDO-Meeting ](https://etherpad.openstack.org/p/RDO-Meeting)

Every Wednesday on #rdo on Freenode IRC

Source: https://apps.fedoraproject.org/calendar/meeting/2017/

From emilien at redhat.com Mon Oct 23 16:57:03 2017
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 23 Oct 2017 09:57:03 -0700
Subject: [rdo-list] Problem with ha-router
In-Reply-To:
References:
Message-ID:

Hey Cédric,

You might get some help on openstack-dev [tripleo], or by filing a bug in
Launchpad/tripleo, but from my experience with rdo-list there is no
TripleO support here.

HTH,

On Mon, Oct 23, 2017 at 1:23 AM, Cedric Lecomte wrote:
> Hello all,
>
> I tried to deploy RDO Pike without containers on our internal platform.
>
[cut]
> Any idea?

--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jlabarre at redhat.com Mon Oct 23 18:21:52 2017
From: jlabarre at redhat.com (James LaBarre)
Date: Mon, 23 Oct 2017 14:21:52 -0400
Subject: [rdo-list] tunneling to Horizon
In-Reply-To:
References: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com>
 <698f5447-4ca9-db19-042e-bc10a53faccc@redhat.com>
Message-ID: <610418ea-2201-b62d-6fa6-7bd025aa1ac1@redhat.com>

On 10/19/2017 02:14 PM, Wesley Hayutin wrote:
[cut]
> Hrm, James, that sounds like the issue we doc'd in
> https://docs.openstack.org/tripleo-quickstart/latest/accessing-undercloud.html#access-via-the-tripleo-ui
>
> Please have a look and let us know.

So I tried connecting to that port 3000 as it shows on the link above. I
tried connecting from my own laptop, pointing to the host system for the
TripleO cluster, and also from the host system pointing to the undercloud
IP (I cannot see the undercloud IP from my laptop).

I *do* get a blank root directory if I just browse to the undercloud VM
(no port or sub-directories).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
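Before pointing a browser at a guessed port, it can help to list what is
actually listening on the undercloud. A sketch, assuming SSH access as the
stack user (the hostname is a placeholder):

  # Show listening TCP sockets and the services that own them.
  ssh stack@undercloud "sudo ss -tlnp | egrep ':(80|443|3000|8080) '"

  # Then, from a host that can reach it, probe a candidate port for
  # HTTP vs HTTPS.
  curl -kIv https://undercloud:3000/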
From jrist at redhat.com Mon Oct 23 20:11:03 2017
From: jrist at redhat.com (Jason E. Rist)
Date: Mon, 23 Oct 2017 14:11:03 -0600
Subject: [rdo-list] tunneling to Horizon
In-Reply-To: <610418ea-2201-b62d-6fa6-7bd025aa1ac1@redhat.com>
References: <3cbc629a-6b55-4090-981d-1e4dc4090245@redhat.com>
 <698f5447-4ca9-db19-042e-bc10a53faccc@redhat.com>
 <610418ea-2201-b62d-6fa6-7bd025aa1ac1@redhat.com>
Message-ID: <074da573-6e67-cff0-9e81-4c28a723f597@redhat.com>

On 10/23/2017 12:21 PM, James LaBarre wrote:
> So I tried connecting to that port 3000 as it shows on the link above.
[cut]
> I *do* get a blank root directory if I just browse to the undercloud VM
> (no port or sub-directories).

Sorry for the slow/late reply on this. There is a bug right now for just
the TripleO-UI tunneling wherein the SSH tunnel doesn't get written
properly.

https://launchpad.net/bugs/1722674

I put up a patch and so did Sagi: https://review.openstack.org/#/c/511143/1

Both work for me in solving the tunneling issue, but then there is another
issue wherein the TripleO-UI config doesn't get written properly if the
quickstart setup is SSL, which it is by default (and honestly I don't know
how to disable that).

https://bugs.launchpad.net/tripleo/+bug/1725115

At the bottom of the /etc/systemd/system/ssh-tunnel.service file there are
two lines that should help with hitting Horizon:

-L 0.0.0.0:8181:overcloud.localdomain:80 \
-L 0.0.0.0:8443:overcloud.localdomain:443

Modify as necessary.

-J
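For reference, a sketch of what those two forwards look like as a
standalone command, in case the systemd unit is not in place; the
hostnames match the quickstart defaults but may differ per setup:

  # Run on the virthost; -N -f keeps the forwards up in the background.
  ssh -N -f \
      -L 0.0.0.0:8181:overcloud.localdomain:80 \
      -L 0.0.0.0:8443:overcloud.localdomain:443 \
      stack@undercloud

  # Horizon should then answer on the virthost itself:
  #   http://<virthost>:8181/   (plain HTTP)
  #   https://<virthost>:8443/  (TLS)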