From mkudlej at redhat.com Thu Dec 1 09:40:58 2016 From: mkudlej at redhat.com (Martin Kudlej) Date: Thu, 1 Dec 2016 10:40:58 +0100 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing Message-ID: Hi Jeff, all, could you please move https://github.com/mkudlej/usmqe-centos-ci to https://github.com/Tendrl ? It is a repository with an Ansible playbook for deploying machines in CentOS CI for Tendrl testing. -- Best Regards, Martin Kudlej. RHSC/USM Senior Quality Assurance Engineer Red Hat Czech s.r.o. Phone: +420 532 294 155 E-mail: mkudlej at redhat.com IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, #usm-meeting @ redhat #tendrl-devel @ freenode From sankarshan.mukhopadhyay at gmail.com Thu Dec 1 09:45:40 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Thu, 1 Dec 2016 15:15:40 +0530 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing In-Reply-To: References: Message-ID: On Thu, Dec 1, 2016 at 3:10 PM, Martin Kudlej wrote: > > could you please move https://github.com/mkudlej/usmqe-centos-ci to > https://github.com/Tendrl ? > > It is a repository with an Ansible playbook for deploying machines in CentOS CI > for Tendrl testing. To clarify - are you seeking to accomplish what is explained at ? -- sankarshan mukhopadhyay From mkudlej at redhat.com Thu Dec 1 09:55:43 2016 From: mkudlej at redhat.com (Martin Kudlej) Date: Thu, 1 Dec 2016 10:55:43 +0100 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing In-Reply-To: References: Message-ID: <70deaedc-c383-8dd1-2db7-f0444955e986@redhat.com> Hi Sankarshan, On 12/01/2016 10:45 AM, Sankarshan Mukhopadhyay wrote: > To clarify - are you seeking to accomplish what is explained at > > ? Yes, I have no rights for that transfer. -- Best Regards, Martin Kudlej. RHSC/USM Senior Quality Assurance Engineer Red Hat Czech s.r.o. 
Phone: +420 532 294 155 E-mail: mkudlej at redhat.com IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, #usm-meeting @ redhat #tendrl-devel @ freenode From shtripat at redhat.com Thu Dec 1 10:01:43 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Thu, 1 Dec 2016 15:31:43 +0530 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing In-Reply-To: References: Message-ID: <794d4a66-575a-50e5-43ba-be6396a7d2b4@redhat.com> On 12/01/2016 03:10 PM, Martin Kudlej wrote: > Hi Jeff, all, > > could you please move https://github.com/mkudlej/usmqe-centos-ci to > https://github.com/Tendrl ? > > It is repository with Ansible playbook for deploying machines in > CentOS CI for Tendrl testing. > Shouldn't we also call the repo "centos-ci" once it's moved under the umbrella of Tendrl? From mbukatov at redhat.com Thu Dec 1 10:02:00 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Thu, 1 Dec 2016 11:02:00 +0100 Subject: [Tendrl-devel] labeling github issues Message-ID: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> Dear all, I would like to assign labels (such as "bug" or "question") to GitHub issues I have created, but I don't seem to have the access rights needed. Could you reconfigure the Tendrl GitHub group so that QE team members can add labels to their GitHub issues? Thank you. 
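As a side note to the request above: once the permissions are sorted out, labels can also be applied through the GitHub REST API rather than the web UI. A minimal Python sketch follows; the repository, issue number, and token below are placeholders for illustration, not values taken from this thread, and the payload shape follows the labels endpoint (`POST /repos/{owner}/{repo}/issues/{issue_number}/labels`):

```python
import json
import urllib.request

API = "https://api.github.com"

def build_label_request(owner, repo, issue, labels, token):
    """Build (but do not send) a request adding labels to an issue.

    Targets POST /repos/{owner}/{repo}/issues/{issue_number}/labels.
    """
    url = f"{API}/repos/{owner}/{repo}/issues/{issue}/labels"
    data = json.dumps({"labels": labels}).encode()
    return urllib.request.Request(
        url,
        data=data,
        method="POST",
        headers={
            "Authorization": f"token {token}",   # placeholder token
            "Accept": "application/vnd.github+json",
        },
    )

# Hypothetical issue number 1 in the Tendrl documentation repo:
req = build_label_request("Tendrl", "documentation", 1, ["bug"], "<token>")
print(req.full_url)
# urllib.request.urlopen(req)  # would perform the call, given a real token
```

Sending the request still requires the caller to have write access to the repository, which is exactly the permission gap discussed in the email.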
-- Martin Bukatovic USM QE team From mkudlej at redhat.com Thu Dec 1 10:08:50 2016 From: mkudlej at redhat.com (Martin Kudlej) Date: Thu, 1 Dec 2016 11:08:50 +0100 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing In-Reply-To: <794d4a66-575a-50e5-43ba-be6396a7d2b4@redhat.com> References: <794d4a66-575a-50e5-43ba-be6396a7d2b4@redhat.com> Message-ID: <269642bb-b5b1-5873-c706-8900856c95b1@redhat.com> Hi Shubhendu, On 12/01/2016 11:01 AM, Shubhendu Tripathi wrote: > On 12/01/2016 03:10 PM, Martin Kudlej wrote: >> Hi Jeff, all, >> >> could you please move https://github.com/mkudlej/usmqe-centos-ci to https://github.com/Tendrl ? >> >> It is repository with Ansible playbook for deploying machines in CentOS CI for Tendrl testing. >> > > Shouldnt we also call the repo "centos-ci" once its moved under the umbrella of tendrl ? All repositories focused on Tendrl quality carry the usmqe prefix, even when they are under the Tendrl umbrella, so we can tell at a glance that a repository is focused on quality. -- Best Regards, Martin Kudlej. RHSC/USM Senior Quality Assurance Engineer Red Hat Czech s.r.o. Phone: +420 532 294 155 E-mail: mkudlej at redhat.com IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, #usm-meeting @ redhat #tendrl-devel @ freenode From sankarshan.mukhopadhyay at gmail.com Thu Dec 1 10:16:25 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Thu, 1 Dec 2016 15:46:25 +0530 Subject: [Tendrl-devel] labeling github issues In-Reply-To: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> References: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> Message-ID: On Thu, Dec 1, 2016 at 3:32 PM, Martin Bukatovic wrote: > I would like to assign labels (such as "bug" or "question") to github > issues I have created, but I don't seem to have the access rights > needed. Could you reconfigure the Tendrl github group so that qe team > members can add labels to theirs github issues? Alright. 
I'm missing something here. The specific labels (the names you indicate) exist. Can you provide me with a link to a particular issue? It should be easier for me to figure out what to do. -- sankarshan mukhopadhyay From mbukatov at redhat.com Thu Dec 1 10:38:56 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Thu, 1 Dec 2016 11:38:56 +0100 Subject: [Tendrl-devel] labeling github issues In-Reply-To: References: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> Message-ID: On 12/01/2016 11:16 AM, Sankarshan Mukhopadhyay wrote: > On Thu, Dec 1, 2016 at 3:32 PM, Martin Bukatovic wrote: >> I would like to assign labels (such as "bug" or "question") to github >> issues I have created, but I don't seem to have the access rights >> needed. Could you reconfigure the Tendrl github group so that qe team >> members can add labels to theirs github issues? > > Alright. I'm missing something here. The specific label (names, which > you indicate) exist. Can you provide me with a link to a particular > issue? It should be easier for me to figure out what to do. The problem I have here is that while the labels exist, and other team members are using them on some GitHub issues, I'm unable to do so. When I click on "New issue" in any Tendrl project on GitHub, I don't see the knobs for setting the label at all [1] - the right panel which provides those options is missing. Nor do I see them when I try to edit an already created issue. Since I'm able to label issues of my own projects, I suspect that this is related to the access rights of the Tendrl GitHub group. To try this yourself, click the "New issue" button of the Tendrl documentation project [2] and compare it with my screenshot [1]. If you are able to see knobs to set labels in the right panel, while I'm not provided with this option as shown on the screenshot, we would need to reconfigure access rights so that the QE team members can add labels to Tendrl GitHub issues. Thank you for your help. 
[1] https://ibin.co/33riFN0YCthe.png [2] https://github.com/Tendrl/documentation/issues/new -- Martin Bukatovic USM QE team From mbukatov at redhat.com Thu Dec 1 17:26:58 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Thu, 1 Dec 2016 18:26:58 +0100 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface Message-ID: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Dear all, would it make sense to link to the GlusterFS/Ceph documentation from the Tendrl web interface for each storage-specific operation? So, for example, in a screen which provides configuration related to the start of a GlusterFS rebalance operation, we would provide a link to the GlusterFS documentation on rebalance. This would make clear what each operation means exactly, it would fit nicely with our plan to use storage-specific terminology, and, last but not least, it would simplify the issues related to integrating two very different storage systems into a single management interface when a common term is used. What do you think? I believe that such pointers would provide significant value for the user. -- Martin Bukatovic USM QE team From khartsoe at redhat.com Thu Dec 1 17:33:29 2016 From: khartsoe at redhat.com (Kenneth Hartsoe) Date: Thu, 1 Dec 2016 12:33:29 -0500 (EST) Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Message-ID: <667961400.73050214.1480613609439.JavaMail.zimbra@redhat.com> Hi Martin, I agree; however, I would like to discuss further to identify any potential issues related to product versioning, link stability, etc. I'll set up a meeting to explore. If anyone else wants to join, please let me know, thanks. 
Ken Hartsoe Content Strategist Red Hat Storage Documentation khartsoe at redhat.com; IRC: khartsoe Office: 919 754 4770; Internal: 814 4770 ----- Original Message ----- | Dear all, | | would it make sense to link to GlusterFS/Ceph documentation from | the Tendrl web interface for each storage specific operation? So for | example in a screen which provides configuration related to start of | glusterfs rebalance operation, we would provide a link to glusterfs | documentation on rebalance. | | This would make clear what each operation means exactly, it would fit | nicely to our plan to use storage specific terminology and last but | not least it would simplify the issues related to integration of 2 very | different storage systems into single management interface, when a | common term is used. | | What do you think? I believe that such pointers would provide | a significant value for the user. | | -- | Martin Bukatovic | USM QE team | | _______________________________________________ | Tendrl-devel mailing list | Tendrl-devel at redhat.com | https://www.redhat.com/mailman/listinfo/tendrl-devel | From mbukatov at redhat.com Thu Dec 1 17:41:36 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Thu, 1 Dec 2016 18:41:36 +0100 Subject: [Tendrl-devel] Few questions/clarification regarding tendrl In-Reply-To: References: Message-ID: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> On 09/06/2016 12:46 PM, Martin Bukatovic wrote: > On 08/31/2016 11:14 AM, John Spray wrote: >> I would also like to hear more about how Tendrl will interface to Ceph >> -- I'm sure others on ceph-devel would be interested too. I've been >> given the impression off-list that the copied Calamari code is mainly >> a placeholder, and I've pointed out that ceph-mgr is coming soon and >> is intended for exactly this kind of thing. >> > > +1 from me. Does anyone have any updates on this? For some reason, I can't find any resolution on this (maybe it's just me unable to search properly). 
What is the plan here? Are we going to continue building the Tendrl Ceph integration based on code forked from Calamari? When I look at https://github.com/Tendrl/ceph_integration, it still shows that this is the case. -- Martin Bukatovic USM QE team From mcarrano at redhat.com Thu Dec 1 18:06:29 2016 From: mcarrano at redhat.com (Matt Carrano) Date: Thu, 1 Dec 2016 13:06:29 -0500 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: <667961400.73050214.1480613609439.JavaMail.zimbra@redhat.com> References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> <667961400.73050214.1480613609439.JavaMail.zimbra@redhat.com> Message-ID: Hi Ken, I'd like to participate in any discussion of this. Agree that in theory it is a good idea, but there are potential issues as you reference. Matt On Thu, Dec 1, 2016 at 12:33 PM, Kenneth Hartsoe wrote: > Hi Martin, > > I agree; however, I would like to discuss further to identify any > potential issues related to product versioning, link stability, etc. I'll > set up a meeting to explore. If anyone else wants to join, please let me > know, thanks. > > Ken Hartsoe > Content Strategist > Red Hat Storage Documentation > > khartsoe at redhat.com; IRC: khartsoe > Office: 919 754 4770; Internal: 814 4770 > > ----- Original Message ----- > | Dear all, > | > | would it make sense to link to GlusterFS/Ceph documentation from > | the Tendrl web interface for each storage specific operation? So for > | example in a screen which provides configuration related to start of > | glusterfs rebalance operation, we would provide a link to glusterfs > | documentation on rebalance. > | > | This would make clear what each operation means exactly, it would fit > | nicely to our plan to use storage specific terminology and last but > | not least it would simplify the issues related to integration of 2 very > | different storage systems into single management interface, when a > | common term is used. > | > | What do you think? 
I believe that such pointers would provide > | a significant value for the user. > | > | -- > | Martin Bukatovic > | USM QE team > | > | _______________________________________________ > | Tendrl-devel mailing list > | Tendrl-devel at redhat.com > | https://www.redhat.com/mailman/listinfo/tendrl-devel > | > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > -- Matt Carrano Sr. Interaction Designer Red Hat, Inc. mcarrano at redhat.com From mbukatov at redhat.com Thu Dec 1 18:10:17 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Thu, 1 Dec 2016 19:10:17 +0100 Subject: [Tendrl-devel] Tendrl UX designs for review In-Reply-To: References: Message-ID: <7ac8a365-1fdb-2d83-a16c-d5e408aa7114@redhat.com> On 11/23/2016 09:30 PM, Matt Carrano wrote: > Three new UX designs are published for review. Please see the links below. > > Rebalance File Share (TEN-163): > https://redhat.invisionapp.com/share/AB94BNET6 > > Create Ceph Cluster (TEN-164): > https://redhat.invisionapp.com/share/2K8M4PQYZ > > Delete File Share (TEN-165): https://redhat.invisionapp.com/share/729GRP1W9 > > Comments may be left directly in the InVision documents. If you have never > commented in InVision before, you should read this article > https://support.invisionapp.com/hc/en-us/articles/209192426-How-do-I-comment-on-a-prototype- > > Please review and comment by COB on Wed Nov 30. We will be scheduling a > follow-up review meeting sometime in the next 2 weeks. Sorry for the delayed update, I just reviewed the first one and I'm going through the second one right now. Going through the docs and reviewing older BZs to find references takes some time. But I have a question: how do we ensure that feedback provided via InVision comments is properly discussed and/or transformed into action? 
Especially given the broad group of people involved (we would need to make sure people across the Ceph and Gluster teams are invited to the follow-up review meeting as well). -- Martin Bukatovic USM QE team From julim at redhat.com Thu Dec 1 18:36:17 2016 From: julim at redhat.com (Ju Lim) Date: Thu, 1 Dec 2016 13:36:17 -0500 Subject: [Tendrl-devel] Tendrl UX designs for review In-Reply-To: <7ac8a365-1fdb-2d83-a16c-d5e408aa7114@redhat.com> References: <7ac8a365-1fdb-2d83-a16c-d5e408aa7114@redhat.com> Message-ID: Martin: We've not yet had time to review all the comments. Rest assured they will be turned into something actionable as we revise the designs. Additionally, we've not yet scheduled the review of these designs, but it's in our plans. We've just been unable to schedule it yet. Please feel free to comment on the designs in the meantime, and/or if you want to wait till our planned review, that is fine also. Thanks, Ju On Thu, Dec 1, 2016 at 1:10 PM, Martin Bukatovic wrote: > On 11/23/2016 09:30 PM, Matt Carrano wrote: > > Three new UX designs are published for review. Please see the links > below. > > > > Rebalance File Share (TEN-163): > > https://redhat.invisionapp.com/share/AB94BNET6 > > > > Create Ceph Cluster (TEN-164): > > https://redhat.invisionapp.com/share/2K8M4PQYZ > > > > Delete File Share (TEN-165): https://redhat.invisionapp.com/share/729GRP1W9 > > > > Comments may be left directly in the InVision documents. If you have > never > > commented in InVision before, you should read this article > > https://support.invisionapp.com/hc/en-us/articles/209192426-How-do-I-comment-on-a-prototype- > > > > Please review and comment by COB on Wed Nov 30. We will be scheduling a > > follow-up review meeting sometime in the next 2 weeks. > > Sorry for the delayed update, I just reviewed the 1st and I'm going > through the 2nd one right now. Going through the docs and review > older BZs to find references takes some time. 
> > But I have a question: how do we ensure that feedback provided via > invision comments is properly discussed and/or transformed into action? > Especially given broad group of people involved (we would need to make > sure people across ceph and gluster teams are invited to the follow up > review meeting as well). > > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > From japplewh at redhat.com Thu Dec 1 21:07:29 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Thu, 01 Dec 2016 21:07:29 +0000 Subject: [Tendrl-devel] Few questions/clarification regarding tendrl In-Reply-To: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> References: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> Message-ID: The Ceph team and Nishanth and a couple of others actually met yesterday to discuss this. The plan is for development to file bugs on the needed features to support our needs by Friday and let the Ceph team assess their ability to deliver these. Then we will meet again next week to review. But you rightly point out there is a disconnect that needs to be addressed. On Thu, Dec 1, 2016 at 12:41 PM Martin Bukatovic wrote: > On 09/06/2016 12:46 PM, Martin Bukatovic wrote: > > On 08/31/2016 11:14 AM, John Spray wrote: > >> I would also like to hear more about how Tendrl will interface to Ceph > >> -- I'm sure others on ceph-devel would be interested too. I've been > >> given the impression off-list that the copied Calamari code is mainly > >> a placeholder, and I've pointed out that ceph-mgr is coming soon and > >> is intended for exactly this kind of thing. > >> > > > > +1 from me. Does anyone have any updates on this? > > For some reason, I can't find any resolution on this (maybe it's > just me unable to search properly). What is the plan here? 
Are > we going to continue build Tendrl Ceph integration based on code > forked from calamari? When I look at > https://github.com/Tendrl/ceph_integration, it still shows that this is > the case. > > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > From sankarshan.mukhopadhyay at gmail.com Fri Dec 2 03:12:59 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Fri, 2 Dec 2016 08:42:59 +0530 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Message-ID: On Thu, Dec 1, 2016 at 10:56 PM, Martin Bukatovic wrote: > would it make sense to link to GlusterFS/Ceph documentation from > the Tendrl web interface for each storage specific operation? So for > example in a screen which provides configuration related to start of > glusterfs rebalance operation, we would provide a link to glusterfs > documentation on rebalance. > This is fraught with some niggles. I've included Amye (Community Lead for Gluster) in this conversation so that we can understand the state of Gluster documentation to cite and refer. > This would make clear what each operation means exactly, it would fit > nicely to our plan to use storage specific terminology and last but > not least it would simplify the issues related to integration of 2 very > different storage systems into single management interface, when a > common term is used. > > What do you think? I believe that such pointers would provide > a significant value for the user. 
-- sankarshan mukhopadhyay From amye at redhat.com Fri Dec 2 04:04:15 2016 From: amye at redhat.com (Amye Scavarda) Date: Thu, 1 Dec 2016 20:04:15 -0800 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Message-ID: Thanks for including me! Responses inline. On Thu, Dec 1, 2016 at 7:12 PM, Sankarshan Mukhopadhyay wrote: > On Thu, Dec 1, 2016 at 10:56 PM, Martin Bukatovic wrote: >> would it make sense to link to GlusterFS/Ceph documentation from >> the Tendrl web interface for each storage specific operation? So for >> example in a screen which provides configuration related to start of >> glusterfs rebalance operation, we would provide a link to glusterfs >> documentation on rebalance. >> It might, but realize that once we do connect these things, it's on the teams to be able to maintain this long term. If you give a user something once, they're rightly going to expect this feature in the future unless you explicitly redirect towards a new and remarkably improved feature. > > This is fraught with some niggles. I've included Amye (Community Lead > for Gluster) in this conversation so that we can understand the state > of Gluster documentation to cite and refer. Gluster documentation is in an ever evolving state, as an upstream project is. You'll have to be more specific. :) > >> This would make clear what each operation means exactly, it would fit >> nicely to our plan to use storage specific terminology and last but >> not least it would simplify the issues related to integration of 2 very >> different storage systems into single management interface, when a >> common term is used. >> >> What do you think? I believe that such pointers would provide >> a significant value for the user. I don't disagree at all - this would be a very nice thing for our users! 
But what happens two years down the line when our directions between Ceph and Gluster have diverged - who is responsible for maintaining coherency and communication between all three projects? > > > > > -- > sankarshan mukhopadhyay > -- Amye Scavarda | amye at redhat.com | Gluster Community Lead From shtripat at redhat.com Fri Dec 2 04:41:07 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Fri, 2 Dec 2016 10:11:07 +0530 Subject: [Tendrl-devel] Tendrl specifications PRs for features/changes planned Message-ID: <3ac73544-52be-9e97-47e9-d6dbdda90a67@redhat.com> Hi, Below are a few already-available PRs for planned Tendrl features/changes; a few more are underway. https://github.com/Tendrl/specifications/pull/6 https://github.com/Tendrl/specifications/pull/7 https://github.com/Tendrl/specifications/pull/8 https://github.com/Tendrl/specifications/pull/9 https://github.com/Tendrl/specifications/pull/10 https://github.com/Tendrl/specifications/pull/11 Please have a look at them and provide your comments (if any). Thanks and Regards, Shubhendu From rkanade at redhat.com Fri Dec 2 05:48:43 2016 From: rkanade at redhat.com (Rohan Kanade) Date: Fri, 2 Dec 2016 11:18:43 +0530 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing In-Reply-To: <269642bb-b5b1-5873-c706-8900856c95b1@redhat.com> References: <794d4a66-575a-50e5-43ba-be6396a7d2b4@redhat.com> <269642bb-b5b1-5873-c706-8900856c95b1@redhat.com> Message-ID: We should definitely drop the "USM" from upstream repositories. Call it tendrl-centos-ci or something. On Thu, Dec 1, 2016 at 3:38 PM, Martin Kudlej wrote: > Hi Shubhendu, > > On 12/01/2016 11:01 AM, Shubhendu Tripathi wrote: > >> On 12/01/2016 03:10 PM, Martin Kudlej wrote: >> >>> Hi Jeff, all, >>> >>> could you please move https://github.com/mkudlej/usmqe-centos-ci to >>> https://github.com/Tendrl ? 
>>> >>> It is repository with Ansible playbook for deploying machines in CentOS >>> CI for Tendrl testing. >>> >>> >> Shouldnt we also call the repo "centos-ci" once its moved under the >> umbrella of tendrl ? >> > > all repositories focused on Tendrl quality have usmqe prefix even if they > are under Tendrl umbrella so we can distinguish that repository is focused > on quality. > > -- > Best Regards, > Martin Kudlej. > RHSC/USM Senior Quality Assurance Engineer > Red Hat Czech s.r.o. > > Phone: +420 532 294 155 > E-mail:mkudlej at redhat.com > IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, > #usm-meeting @ redhat > #tendrl-devel @ freenode > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > From mrugesh at brainfunked.org Fri Dec 2 07:34:00 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Fri, 2 Dec 2016 13:04:00 +0530 Subject: [Tendrl-devel] Tendrl specifications PRs for features/changes planned In-Reply-To: <3ac73544-52be-9e97-47e9-d6dbdda90a67@redhat.com> References: <3ac73544-52be-9e97-47e9-d6dbdda90a67@redhat.com> Message-ID: On 2 December 2016 at 10:11, Shubhendu Tripathi wrote: > > Hi, > > Below are the few already available PRs for tendrl features/changes planned. Few more underway and in process. > > https://github.com/Tendrl/specifications/pull/6 > https://github.com/Tendrl/specifications/pull/7 > https://github.com/Tendrl/specifications/pull/8 > https://github.com/Tendrl/specifications/pull/9 > https://github.com/Tendrl/specifications/pull/10 > https://github.com/Tendrl/specifications/pull/11 > > Request all to have look on them and provide your valuable comments (if any). Have reviewed the specifications. There are also some pull requests on alerting and monitoring that have an impact on the specifications. I've pointed out the comments wherever applicable. 
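For anyone tracking the specification PRs listed earlier in this thread, the set of open pull requests can also be fetched from the GitHub REST API (`GET /repos/{owner}/{repo}/pulls`, which returns open PRs by default). A minimal sketch; the network call itself is left commented out since it needs connectivity and is subject to API rate limits:

```python
import json
import urllib.request

def open_pulls_url(owner, repo):
    # GET /repos/{owner}/{repo}/pulls lists open pull requests by default;
    # state=open is passed explicitly here for clarity.
    return f"https://api.github.com/repos/{owner}/{repo}/pulls?state=open"

def fetch_open_pulls(owner, repo):
    """Return (number, title) pairs for each open PR. Requires network."""
    with urllib.request.urlopen(open_pulls_url(owner, repo)) as resp:
        return [(p["number"], p["title"]) for p in json.load(resp)]

print(open_pulls_url("Tendrl", "specifications"))
# fetch_open_pulls("Tendrl", "specifications")  # performs the actual call
```

This avoids hand-maintaining a list of PR links in email as new specifications are posted.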
-- Mrugesh From mbukatov at redhat.com Fri Dec 2 09:00:46 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Fri, 2 Dec 2016 10:00:46 +0100 Subject: [Tendrl-devel] move github repo for deployment machines in CentOS for Tendrl testing In-Reply-To: References: <794d4a66-575a-50e5-43ba-be6396a7d2b4@redhat.com> <269642bb-b5b1-5873-c706-8900856c95b1@redhat.com> Message-ID: <3444d09a-4a89-bf09-450a-fe9f01e5eed8@redhat.com> On 12/02/2016 06:48 AM, Rohan Kanade wrote: > We should definitely drop the "USM" from upstream repositories. Call it > tendrl-centos-ci or something When we agreed that the USM QE team would host its repositories in the Tendrl group, we also agreed that QE repositories would keep their usmqe prefix to make the ownership/responsibility clear. -- Martin Bukatovic USM QE team From mbukatov at redhat.com Fri Dec 2 10:28:52 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Fri, 2 Dec 2016 11:28:52 +0100 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Message-ID: On 12/02/2016 05:04 AM, Amye Scavarda wrote: > I don't disagree at all - this would be a very nice thing for our > users! But what happens two years down the line when our directions > between Ceph and Gluster have diverged - who is responsible for > maintaining coherency and communication between all three projects? That is a good point and the main risk here. If Tendrl linked to its own documentation only (e.g., as the oVirt console does), it would be easier to manage, as the Tendrl project would control both the user interface and the documentation linked from there. But since my suggestion was to link to the Gluster and Ceph documentation, there are multiple additional problems, and all teams (the Tendrl teams, the Ceph doc team, and the Gluster doc team) would need to work together to make this work. If any of these teams doesn't consider this important, we can't make this happen. 
Since a Tendrl release would list which Gluster versions it supports, it would make sense to link to the documentation of the latest of these versions. For this to work, the Gluster docs for that particular version would need to still be available even after a new upstream version is released. The other task would be for the Tendrl UI team to draft what kind of information we need to be able to link to in the documentation, and the Ceph/Gluster documentation teams would need to add/change docs and maintain them so that the links keep working for as long as a given Tendrl version is supported. -- Martin Bukatovic USM QE team From jspray at redhat.com Fri Dec 2 11:23:13 2016 From: jspray at redhat.com (John Spray) Date: Fri, 2 Dec 2016 11:23:13 +0000 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Message-ID: On Fri, Dec 2, 2016 at 10:28 AM, Martin Bukatovic wrote: > On 12/02/2016 05:04 AM, Amye Scavarda wrote: >> I don't disagree at all - this would be a very nice thing for our >> users! But what happens two years down the line when our directions >> between Ceph and Gluster have diverged - who is responsible for >> maintaining coherency and communication between all three projects? > > That is a good point and a main risk here. > > If tendr links to it's own documentation only (eg. as oVirt > console does), it would be easier to manage as tendrl project would > control both the user interface and the documentation linked from there. > > But since my suggestion was to link to gluster and ceph documentation, > there are multiple additional problems and all teams (tendrl teams, > ceph doc team and gluster doc team) would need to work together to > make this work. If any of these teams doesn't consider this to be > important, we can't make this happen. 
This isn't really a question for "ceph doc team", it's a question for the Ceph community as a whole -- we don't have separate people writing the docs, or separate infrastructure for hosting them. Docs are in ceph.git and builds are on the community ceph.com site. You will have noticed that currently docs are at http://docs.ceph.com/docs/jewel/, http://docs.ceph.com/docs/master/ etc -- obviously you won't get stable links to master so you would have to pick a stable branch to link to, and then re-check all your links every year or so at the point that you update to link to a more recent stable branch. Anyone who wants to go forward with this should take the question to ceph-devel: "Can I rely on URLs on docs.ceph.com?" Because Tendrl doesn't cover most of Ceph administration, and we're still expecting users to use the command line for everything that Tendrl doesn't do, I think there's still a basic expectation that users will have done some level of reading/familiarisation -- at that stage I'm not sure how critical it is for the UI to point people at the documentation. John > Since tendrl release would list which gluster versions it supports, > it would make sense to link to documentation of latest one of these > versions. For this to work, gluster docs for that particular version > would need to be still available even when a new upstream version is > released. > > The other task would be for tendrl ui team to draft what kind of > information we need to be able to link to in the documentation > and ceph/gluster documentation teams would need add/change docs and > to maintain it so that the connection would work as long as given > tendrl version is supported. 
> > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel From mbukatov at redhat.com Fri Dec 2 14:55:13 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Fri, 2 Dec 2016 15:55:13 +0100 Subject: [Tendrl-devel] adding dahorak into @Tendrl/qe Message-ID: <8d03bcca-6bc0-6ad4-9121-13fed9b4b580@redhat.com> Dear Jeff, I noticed that dahorak[1] is missing in @Tendrl/qe group[2]. Could you add him there? I'm writing you since you are the owner of the group. Thank you [1] https://github.com/dahorak [2] https://github.com/orgs/Tendrl/teams/qe -- Martin Bukatovic USM QE team From mbukatov at redhat.com Fri Dec 2 15:15:45 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Fri, 2 Dec 2016 16:15:45 +0100 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> Message-ID: <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> On 12/02/2016 12:23 PM, John Spray wrote: > This isn't really a question for "ceph doc team", it's a question for > the Ceph community as a whole -- we don't have separate people writing > the docs, or separate infrastructure for hosting them. Docs are in > ceph.git and builds are on the community ceph.com site. You will have > noticed that currently docs are at http://docs.ceph.com/docs/jewel/, > http://docs.ceph.com/docs/master/ etc -- obviously you won't get > stable links to master so you would have to pick a stable branch to > link to, and then re-check all your links every year or so at the > point that you update to link to a more recent stable branch. Anyone > who wants to go forward with this should take the question to > ceph-devel: "Can I rely on URLs on docs.ceph.com?" Understood. 
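John's suggestion above, pinning help links to a stable docs branch such as jewel rather than master, could look roughly like the following in UI code. This is an illustrative sketch only: the operation names and page paths are assumptions, not actual Tendrl identifiers, and the doc paths would need to be re-verified whenever the pinned branch changes:

```python
# Pin help links to a stable Ceph docs branch instead of "master",
# so links stay valid for the lifetime of a Tendrl release.
DOCS_BRANCH = "jewel"  # re-point when the supported Ceph release changes

# Hypothetical mapping from UI operations to doc pages (paths assumed).
HELP_PAGES = {
    "create-pool": "rados/operations/pools/",
    "add-osd": "rados/operations/add-or-rm-osds/",
}

def help_url(operation, branch=DOCS_BRANCH):
    """Return the version-pinned documentation URL for a UI operation."""
    page = HELP_PAGES[operation]
    return f"http://docs.ceph.com/docs/{branch}/{page}"

print(help_url("create-pool"))
```

Centralizing the branch in one constant means the yearly "re-check all your links" pass John describes touches a single value plus a link-checker run, rather than URLs scattered across UI screens.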
> Because Tendrl doesn't cover most of Ceph administration, and we're > still expecting users to use the command line for everything that > Tendrl doesn't do, I think there's still a basic expectation that > users will have done some level of reading/familiarisation -- at that > stage I'm not sure how critical it is for the UI to point people at > the documentation. That's definitely true; an admin using Tendrl is expected to have some level of knowledge of Ceph or Gluster storage. Maybe we could ask a different question: what would an experienced Ceph administrator find useful to have referenced in the Tendrl UI? Linking to the Ceph docs directly for a particular action (so that the connection between a UI feature and the Ceph action is clear, making it easy to look up Ceph-related details) -- or just linking to the Tendrl docs, which would reveal more details on how the features and values reported by the UI relate to Ceph features, without actually linking to the Ceph docs? -- Martin Bukatovic USM QE team From khartsoe at redhat.com Fri Dec 2 15:59:44 2016 From: khartsoe at redhat.com (Kenneth Hartsoe) Date: Fri, 2 Dec 2016 10:59:44 -0500 (EST) Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> Message-ID: <65773354.73615351.1480694384700.JavaMail.zimbra@redhat.com> Comments inline Ken Hartsoe Content Strategist Red Hat Storage Documentation khartsoe at redhat.com; IRC: khartsoe Office: 919 754 4770; Internal: 814 4770 ----- Original Message ----- | On 12/02/2016 12:23 PM, John Spray wrote: | > This isn't really a question for "ceph doc team", it's a question for | > the Ceph community as a whole -- we don't have separate people writing | > the docs, or separate infrastructure for hosting them. Docs are in | > ceph.git and builds are on the community ceph.com site.
You will have | > noticed that currently docs are at http://docs.ceph.com/docs/jewel/, | > http://docs.ceph.com/docs/master/ etc -- obviously you won't get | > stable links to master so you would have to pick a stable branch to | > link to, and then re-check all your links every year or so at the | > point that you update to link to a more recent stable branch. Anyone | > who wants to go forward with this should take the question to | > ceph-devel: "Can I rely on URLs on docs.ceph.com?" | | Understood. This stability (or instability) is why my original thought was that the links should point to downstream content, where we would have more control over link stability, versioning, content, etc. That is, the UI text/hover help would be the first level of assistance; then, if the user wanted more in-depth concept or advanced-task content, they would click the link to the downstream topic. | | > Because Tendrl doesn't cover most of Ceph administration, and we're | > still expecting users to use the command line for everything that | > Tendrl doesn't do, I think there's still a basic expectation that | > users will have done some level of reading/familiarisation -- at that | > stage I'm not sure how critical it is for the UI to point people at | > the documentation. | | That's definitely true, admin using the Tendrl is expected to have | some knowledge level of ceph or gluster storage. Although the overall scope and necessity of identifying which user content (if any) needs to be integrated is still under discussion, I think of a critical-type scenario as a user task whose ramifications require a thorough understanding of the advantages, disadvantages, and consequences before implementing, or a non-routine task where familiarity might be limited. Within the UI, where user assistance is limited, necessary content can be linked to address these scenarios.
| Maybe we could ask a different question: what would experienced Ceph | administrator found useful to be referenced in the Tendrl ui? Linking to | the ceph docs directly for a particular action (so that the connection | between an ui feature and the ceph action is clear, making looking for | ceph related details easy) - or just linking to the Tendrl docs which | would reveal more details on how the features and values reported by | the ui relates to the ceph features without actually linking to the | ceph docs? | | -- | Martin Bukatovic | USM QE team | | _______________________________________________ | Tendrl-devel mailing list | Tendrl-devel at redhat.com | https://www.redhat.com/mailman/listinfo/tendrl-devel | From japplewh at redhat.com Fri Dec 2 16:54:47 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Fri, 02 Dec 2016 16:54:47 +0000 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> Message-ID: I think this is a good idea but I don't think it rises to the top of the priority list given all the things we are trying to do. I suggest we discuss this in the Tendrl 4 time frame and focus now on actual feature development and testing. On Fri, Dec 2, 2016 at 10:15 AM Martin Bukatovic wrote: > On 12/02/2016 12:23 PM, John Spray wrote: > > This isn't really a question for "ceph doc team", it's a question for > > the Ceph community as a whole -- we don't have separate people writing > > the docs, or separate infrastructure for hosting them. Docs are in > > ceph.git and builds are on the community ceph.com site. 
You will have > > noticed that currently docs are at http://docs.ceph.com/docs/jewel/, > > http://docs.ceph.com/docs/master/ etc -- obviously you won't get > > stable links to master so you would have to pick a stable branch to > > link to, and then re-check all your links every year or so at the > > point that you update to link to a more recent stable branch. Anyone > > who wants to go forward with this should take the question to > > ceph-devel: "Can I rely on URLs on docs.ceph.com?" > > Understood. > > > Because Tendrl doesn't cover most of Ceph administration, and we're > > still expecting users to use the command line for everything that > > Tendrl doesn't do, I think there's still a basic expectation that > > users will have done some level of reading/familiarisation -- at that > > stage I'm not sure how critical it is for the UI to point people at > > the documentation. > > That's definitely true, admin using the Tendrl is expected to have > some knowledge level of ceph or gluster storage. > > Maybe we could ask a different question: what would experienced Ceph > administrator found useful to be referenced in the Tendrl ui? Linking to > the ceph docs directly for a particular action (so that the connection > between an ui feature and the ceph action is clear, making looking for > ceph related details easy) - or just linking to the Tendrl docs which > would reveal more details on how the features and values reported by > the ui relates to the ceph features without actually linking to the > ceph docs? 
> > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > From japplewh at redhat.com Fri Dec 2 16:54:53 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Fri, 02 Dec 2016 16:54:53 +0000 Subject: [Tendrl-devel] adding dahorak into @Tendrl/qe In-Reply-To: <8d03bcca-6bc0-6ad4-9121-13fed9b4b580@redhat.com> References: <8d03bcca-6bc0-6ad4-9121-13fed9b4b580@redhat.com> Message-ID: Hi Martin I am traveling today. Could one of the other admins make this change for me? Thanks Jeff On Fri, Dec 2, 2016 at 9:55 AM Martin Bukatovic wrote: > Dear Jeff, > > I noticed that dahorak[1] is missing in @Tendrl/qe group[2]. > Could you add him there? I'm writing you since you are > the owner of the group. > > Thank you > > [1] https://github.com/dahorak > [2] https://github.com/orgs/Tendrl/teams/qe > > -- > Martin Bukatovic > USM QE team > From amukherj at redhat.com Sat Dec 3 04:46:17 2016 From: amukherj at redhat.com (Atin Mukherjee) Date: Sat, 03 Dec 2016 04:46:17 +0000 Subject: [Tendrl-devel] Regarding some additional data points in the state output In-Reply-To: <040d4c4e-c468-4ae0-81dd-3b4fa5238c4c@redhat.com> References: <5e1609e4-cfc3-4dc3-f421-ac51f56b9d64@redhat.com> <1988c0ba-6dfb-cdb3-a70e-4e51f02830e0@redhat.com> <6a77b36c-086a-8903-e5eb-b277a45a178f@redhat.com> <040d4c4e-c468-4ae0-81dd-3b4fa5238c4c@redhat.com> Message-ID: The patch is now into gluster 3.9 nightly builds. On Wed, 30 Nov 2016 at 15:13, Samikshan Bairagya wrote: > > > On 11/30/2016 02:53 PM, Rohan Kanade wrote: > > @samikshan, > > > > Please provide sample output of the get-state cli based on this patch > > > > A sample output file is attached with this email. 
> > ~ Samikshan > > > On Wed, Nov 30, 2016 at 2:47 PM, Samikshan Bairagya > > > wrote: > > > >> > >> > >> On 11/30/2016 09:35 AM, Shubhendu Tripathi wrote: > >> > >>> Dear Rohan, > >>> > >>> I accept that the below format is a valid ini file format > syntactically, > >>> but semantically its screwing up volumes listing. > >>> The "Volumes" list returns > >>> > >>> [Volumes] > >>> Volume1.name: test-vol > >>> Volume1.id: 7942d008-e300-4fd9-8af0-5a118afd8d3d > >>> Volume1.type: Distribute > >>> Volume1.transport_type: tcp > >>> Volume1.status: Started > >>> Volume1.brickcount: 1 > >>> Volume1.Brick1.path: 172.17.0.2:/tmp/b1 > >>> Volume1.Brick1.hostname: 172.17.0.2 > >>> Volume1.Brick1.port: 49153 > >>> Volume1.Brick1.rdma_port: 0 > >>> Volume1.Brick1.status: Started > >>> Volume1.Brick1.signedin: True > >>> Volume1.snap_count: 0 > >>> Volume1.stripe_count: 1 > >>> Volume1.replica_count: 1 > >>> Volume1.subvol_count: 1 > >>> Volume1.arbiter_count: 0 > >>> Volume1.disperse_count: 0 > >>> Volume1.redundancy_count: 0 > >>> Volume1.quorum_status: not_applicable > >>> Volume1.snapd_svc.online_status: Offline > >>> Volume1.snapd_svc.inited: True > >>> Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000 > >>> Volume1.rebalance.status: not_started > >>> Volume1.rebalance.failures: 0 > >>> Volume1.rebalance.skipped: 0 > >>> Volume1.rebalance.lookedup: 0 > >>> Volume1.rebalance.files: 0 > >>> Volume1.rebalance.data: 0Bytes > >>> > >>> Also the "Volume1.options" returns the volume2 details mingled within > as > >>> below > >>> > >>> features.barrier: on > >>> transport.address-family: inet > >>> performance.readdir-ahead: on > >>> nfs.disable: on > >>> > >>> Volume2.name: test-vol1 > >>> Volume2.id: 35854708-bb72-45a5-bdbd-77c51e5ebfb9 > >>> Volume2.type: Distribute > >>> Volume2.transport_type: tcp > >>> Volume2.status: Started > >>> Volume2.brickcount: 1 > >>> Volume2.Brick1.path: 172.17.0.2:/tmp/b2 > >>> Volume2.Brick1.hostname: 172.17.0.2 > >>> Volume2.Brick1.port: 
49152 > >>> Volume2.Brick1.rdma_port: 0 > >>> Volume2.Brick1.status: Started > >>> Volume2.Brick1.signedin: True > >>> Volume2.snap_count: 0 > >>> Volume2.stripe_count: 1 > >>> Volume2.replica_count: 1 > >>> Volume2.subvol_count: 1 > >>> Volume2.arbiter_count: 0 > >>> Volume2.disperse_count: 0 > >>> Volume2.redundancy_count: 0 > >>> Volume2.quorum_status: not_applicable > >>> Volume2.snapd_svc.online_status: Offline > >>> Volume2.snapd_svc.inited: True > >>> Volume2.rebalance.id: 00000000-0000-0000-0000-000000000000 > >>> Volume2.rebalance.status: not_started > >>> Volume2.rebalance.failures: 0 > >>> Volume2.rebalance.skipped: 0 > >>> Volume2.rebalance.lookedup: 0 > >>> Volume2.rebalance.files: 0 > >>> Volume2.rebalance.data: 0Bytes > >>> > >>> Tried debugging a little the parser for this ini file and it looks like > >>> sector/sections are formed based on [] brackets and anything below one > >>> section (till next [] found) is treated as one section. > >>> > >>> Instead, the flatted structure like "Volume1.options.nfs.disable: on" > >>> would have been an easier option to parse and code change tendrl side. > >>> > >>> > >> Hi, A patch for this is ready here: http://review.gluster.org/15975. > >> Thanks. > >> > >> ~ Samikshan > >> > >> > >> > >> At the moment I dont find a way to resolve this mingled sections and > >>> handling within tendrl parser. I have tried some tweaking in parser but > >>> looks like sections are formed underneath using the library for ini > >>> parser. > >>> > >>> Comments?? 
> >>> > >>> Regards, > >>> Shubhendu > >>> > >>> > >>> On 11/22/2016 08:16 PM, Rohan Kanade wrote: > >>> > >>>> Sample: > >>>> START>>> > >>>> > >>>> [Global] > >>>> MYUUID: 6bbf8ac2-22a0-4f08-b986-fe75aea9f654 > >>>> op-version: 40000 > >>>> > >>>> [Global options] > >>>> > >>>> [Peers] > >>>> > >>>> [Volumes] > >>>> Volume1.name: test-vol > >>>> Volume1.id: 7942d008-e300-4fd9-8af0-5a118afd8d3d > >>>> Volume1.type: Distribute > >>>> Volume1.transport_type: tcp > >>>> Volume1.status: Started > >>>> Volume1.brickcount: 1 > >>>> Volume1.Brick1.path: 172.17.0.2:/tmp/b1 > >>>> Volume1.Brick1.hostname: 172.17.0.2 > >>>> Volume1.Brick1.port: 49153 > >>>> Volume1.Brick1.rdma_port: 0 > >>>> Volume1.Brick1.status: Started > >>>> Volume1.Brick1.signedin: True > >>>> Volume1.snap_count: 0 > >>>> Volume1.stripe_count: 1 > >>>> Volume1.replica_count: 1 > >>>> Volume1.subvol_count: 1 > >>>> Volume1.arbiter_count: 0 > >>>> Volume1.disperse_count: 0 > >>>> Volume1.redundancy_count: 0 > >>>> Volume1.quorum_status: not_applicable > >>>> Volume1.snapd_svc.online_status: Offline > >>>> Volume1.snapd_svc.inited: True > >>>> Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000 > >>>> Volume1.rebalance.status: not_started > >>>> Volume1.rebalance.failures: 0 > >>>> Volume1.rebalance.skipped: 0 > >>>> Volume1.rebalance.lookedup: 0 > >>>> Volume1.rebalance.files: 0 > >>>> Volume1.rebalance.data: 0Bytes > >>>> [Volume1.options] > >>>> features.barrier: on > >>>> transport.address-family: inet > >>>> performance.readdir-ahead: on > >>>> nfs.disable: on > >>>> > >>>> Volume2.name: test-vol1 > >>>> Volume2.id: 35854708-bb72-45a5-bdbd-77c51e5ebfb9 > >>>> Volume2.type: Distribute > >>>> Volume2.transport_type: tcp > >>>> Volume2.status: Started > >>>> Volume2.brickcount: 1 > >>>> Volume2.Brick1.path: 172.17.0.2:/tmp/b2 > >>>> Volume2.Brick1.hostname: 172.17.0.2 > >>>> Volume2.Brick1.port: 49152 > >>>> Volume2.Brick1.rdma_port: 0 > >>>> Volume2.Brick1.status: Started > >>>> 
Volume2.Brick1.signedin: True > >>>> Volume2.snap_count: 0 > >>>> Volume2.stripe_count: 1 > >>>> Volume2.replica_count: 1 > >>>> Volume2.subvol_count: 1 > >>>> Volume2.arbiter_count: 0 > >>>> Volume2.disperse_count: 0 > >>>> Volume2.redundancy_count: 0 > >>>> Volume2.quorum_status: not_applicable > >>>> Volume2.snapd_svc.online_status: Offline > >>>> Volume2.snapd_svc.inited: True > >>>> Volume2.rebalance.id: 00000000-0000-0000-0000-000000000000 > >>>> Volume2.rebalance.status: not_started > >>>> Volume2.rebalance.failures: 0 > >>>> Volume2.rebalance.skipped: 0 > >>>> Volume2.rebalance.lookedup: 0 > >>>> Volume2.rebalance.files: 0 > >>>> Volume2.rebalance.data: 0Bytes > >>>> [Volume2.options] > >>>> transport.address-family: inet > >>>> performance.readdir-ahead: on > >>>> nfs.disable: on > >>>> > >>>> > >>>> [Services] > >>>> svc1.name: glustershd > >>>> svc1.online_status: Offline > >>>> > >>>> svc2.name: nfs > >>>> svc2.online_status: Offline > >>>> > >>>> svc3.name: bitd > >>>> svc3.online_status: Offline > >>>> > >>>> svc4.name: scrub > >>>> svc4.online_status: Offline > >>>> > >>>> svc5.name: quotad > >>>> svc5.online_status: Offline > >>>> > >>>> > >>>> [Misc] > >>>> Base port: 49152 > >>>> Last allocated port: 49153 > >>>> > >>>> < >>>> > >>>> On Tue, Nov 22, 2016 at 2:06 PM, Rohan Kanade > >>>> wrote: > >>>> > >>>>> Also, please provide a full state dump example with this patch > included, > >>>>> > >>>> easier for Tendrl devs to get started without deploying this patch > >>>> > >>>>> On Tue, Nov 22, 2016 at 1:44 PM, Rohan Kanade > >>>>> wrote: > >>>>> > >>>>>> Id prefer the first option > >>>>>> > >>>>>> > >>>>>> [Volumes] > >>>>>> Volume1.name: tv1 > >>>>>> Volume1.id: 0242f875-24ad-480d-a605-06de2e0f3842 > >>>>>> Volume1.type: Distribute > >>>>>> > >>>>>> Volume1.rebalance.files: 0 > >>>>>> Volume1.rebalance.data: 0Bytes > >>>>>> [Volume1.options] > >>>>>> nfs.disable: on > >>>>>> performance.readdir-ahead: on > >>>>>> transport.address-family: inet > 
>>>>>> features.uss: on > >>>>>> > >>>>>> Volume2.name: tv2 > >>>>>> Volume2.id: 937ad30c-bc08-4928-85e4-ece49235037a > >>>>>> Volume2.type: Distribute > >>>>>> ......... > >>>>>> ......... > >>>>>> > >>>>>> > >>>>>> This would require minor changes to tendrl/gluster-integration > >>>>>> > >>>>> definition files and code. I will draw up a spec on > >>>> Tendrl/specifications > >>>> to document the changes required. Please go ahead with your patch > >>>> > >>>>> Thanks > >>>>>> > >>>>>> On Tue, Nov 22, 2016 at 9:18 AM, Atin Mukherjee < > amukherj at redhat.com> > >>>>>> > >>>>> wrote: > >>>> > >>>>> We are awaiting final confirmation from Rohan. Samikshan has kept the > >>>>>>> changes ready and will push it to gerrit once we hear from Rohan. > >>>>>>> > >>>>>>> On Mon, Nov 21, 2016 at 12:54 PM, Shubhendu Tripathi < > >>>>>>> > >>>>>> shtripat at redhat.com> > >>>> > >>>>> wrote: > >>>>>>> > >>>>>>> Looking at options, I feel option-2 would be more feasible and > might > >>>>>>>> > >>>>>>> not > >>>> > >>>>> need code changes in tendrl. > >>>>>>>> But still lets wait for the confirmation from Rohan. > >>>>>>>> > >>>>>>>> Regards, > >>>>>>>> Shubhendu > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> On 11/21/2016 12:43 PM, Atin Mukherjee wrote: > >>>>>>>> > >>>>>>>> +tendrl-devel > >>>>>>>>> > >>>>>>>>> On Mon, Nov 21, 2016 at 12:41 PM, Samikshan Bairagya < > >>>>>>>>> > >>>>>>>> sbairagy at redhat.com > >>>> > >>>>> wrote: > >>>>>>>>> > >>>>>>>>> Hey Rohan, > >>>>>>>>> > >>>>>>>>>> So the current get-state CLI misses volume specific options in > its > >>>>>>>>>> output. > >>>>>>>>>> Somehow I missed it while coming up with the implementation. > This > >>>>>>>>>> > >>>>>>>>> patch > >>>> > >>>>> by > >>>>>>>>>> Atin is a fix for that: http://review.gluster.org/#/c/15889/1. 
> The > >>>>>>>>>> following example shows how this patch would add these new data > >>>>>>>>>> > >>>>>>>>> points > >>>> > >>>>> and > >>>>>>>>>> how that would change the existing format: > >>>>>>>>>> > >>>>>>>>>> [Volumes] > >>>>>>>>>> Volume1.name: tv1 > >>>>>>>>>> Volume1.id: 0242f875-24ad-480d-a605-06de2e0f3842 > >>>>>>>>>> Volume1.type: Distribute > >>>>>>>>>> > >>>>>>>>>> Volume1.rebalance.files: 0 > >>>>>>>>>> Volume1.rebalance.data: 0Bytes > >>>>>>>>>> [Volume1.options] > >>>>>>>>>> nfs.disable: on > >>>>>>>>>> performance.readdir-ahead: on > >>>>>>>>>> transport.address-family: inet > >>>>>>>>>> features.uss: on > >>>>>>>>>> > >>>>>>>>>> Volume2.name: tv2 > >>>>>>>>>> Volume2.id: 937ad30c-bc08-4928-85e4-ece49235037a > >>>>>>>>>> Volume2.type: Distribute > >>>>>>>>>> ......... > >>>>>>>>>> ......... > >>>>>>>>>> > >>>>>>>>>> So essentially there would be a new section for every volume > that > >>>>>>>>>> > >>>>>>>>> would > >>>> > >>>>> list the option names and corresponding values. Would adding this > >>>>>>>>>> > >>>>>>>>> change > >>>> > >>>>> still keep the get-state output parseable from Tendrl POV? > >>>>>>>>>> > >>>>>>>>>> Or would an output like the following make more sense? Let us > know. > >>>>>>>>>> Thanks. > >>>>>>>>>> > >>>>>>>>>> [Volumes] > >>>>>>>>>> Volume1.name: tv1 > >>>>>>>>>> Volume1.id: 0242f875-24ad-480d-a605-06de2e0f3842 > >>>>>>>>>> Volume1.type: Distribute > >>>>>>>>>> > >>>>>>>>>> Volume1.rebalance.files: 0 > >>>>>>>>>> Volume1.rebalance.data: 0Bytes > >>>>>>>>>> Volume1.options.nfs.disable: on > >>>>>>>>>> Volume1.options.performance.readdir-ahead: on > >>>>>>>>>> Volume1.options.transport.address-family: inet > >>>>>>>>>> Volume1.options.features.uss: on > >>>>>>>>>> > >>>>>>>>>> Volume2.name: tv2 > >>>>>>>>>> Volume2.id: 937ad30c-bc08-4928-85e4-ece49235037a > >>>>>>>>>> Volume2.type: Distribute > >>>>>>>>>> ......... > >>>>>>>>>> ......... 
> >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> ~ Samikshan > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>> _______________________________________________ > >>>>>>>> Tendrl-devel mailing list > >>>>>>>> Tendrl-devel at redhat.com > >>>>>>>> https://www.redhat.com/mailman/listinfo/tendrl-devel > >>>>>>>> > >>>>>>>> > >>>>>>> > >>>>>>> -- > >>>>>>> > >>>>>>> ~ Atin (atinm) > >>>>>>> _______________________________________________ > >>>>>>> Tendrl-devel mailing list > >>>>>>> Tendrl-devel at redhat.com > >>>>>>> https://www.redhat.com/mailman/listinfo/tendrl-devel > >>>>>>> > >>>>>> > >>>>>> _______________________________________________ > >>>> Tendrl-devel mailing list > >>>> Tendrl-devel at redhat.com > >>>> https://www.redhat.com/mailman/listinfo/tendrl-devel > >>>> > >>> > >>> > >>> > > > -- - Atin (atinm) From shtripat at redhat.com Sat Dec 3 09:44:35 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Sat, 3 Dec 2016 04:44:35 -0500 (EST) Subject: [Tendrl-devel] Regarding some additional data points in the state output Message-ID: Thanks Atin for confirmation. I will try with latest build. Regards Shubhendu Sent from Samsung Mobile -------- Original message -------- From: Atin Mukherjee Date:03/12/2016 10:16 (GMT+05:30) To: Rohan Kanade ,Samikshan Bairagya Cc: Mailing list for the contributors to the Tendrl project ,Shubhendu Tripathi Subject: Re: [Tendrl-devel] Regarding some additional data points in the state output The patch is now into gluster 3.9 nightly builds. On Wed, 30 Nov 2016 at 15:13, Samikshan Bairagya wrote: On 11/30/2016 02:53 PM, Rohan Kanade wrote: > @samikshan, > > Please provide sample output of the get-state cli based on this patch > A sample output file is attached with this email. 
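The section-mingling problem discussed in this thread is standard INI-parser behaviour: keys are assigned to the most recent bracketed section header. A minimal sketch with Python's configparser, using a shortened (hypothetical) excerpt of the get-state output quoted above:

```python
import configparser

# Shortened excerpt of the `gluster get-state` output quoted in this
# thread (hypothetical values, trimmed to the keys that matter here).
state = """
[Volumes]
Volume1.name: test-vol
Volume1.status: Started
[Volume1.options]
nfs.disable: on
performance.readdir-ahead: on

Volume2.name: test-vol1
Volume2.status: Started
"""

parser = configparser.ConfigParser()
parser.read_string(state)

# Sections are keyed off the bracketed headers, so every key after
# [Volume1.options] -- including Volume2's -- lands in that section,
# which is exactly the "mingling" Shubhendu reports.
# (configparser lowercases option names by default.)
print(parser.sections())                            # -> ['Volumes', 'Volume1.options']
print('volume2.name' in parser['Volume1.options'])  # -> True (Volume2 swallowed)
print(list(parser['Volumes']))                      # -> ['volume1.name', 'volume1.status']
```

Flattening the options into keys like `Volume1.options.nfs.disable: on`, as proposed in the follow-up patch, keeps every volume's keys inside the single [Volumes] section and sidesteps the problem without any parser changes on the Tendrl side.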
~ Samikshan > On Wed, Nov 30, 2016 at 2:47 PM, Samikshan Bairagya > wrote: > >> >> >> On 11/30/2016 09:35 AM, Shubhendu Tripathi wrote: >> >>> Dear Rohan, >>> >>> I accept that the below format is a valid ini file format syntactically, >>> but semantically its screwing up volumes listing. >>> The "Volumes" list returns >>> >>> [Volumes] >>> Volume1.name: test-vol >>> Volume1.id: 7942d008-e300-4fd9-8af0-5a118afd8d3d >>> Volume1.type: Distribute >>> Volume1.transport_type: tcp >>> Volume1.status: Started >>> Volume1.brickcount: 1 >>> Volume1.Brick1.path: 172.17.0.2:/tmp/b1 >>> Volume1.Brick1.hostname: 172.17.0.2 >>> Volume1.Brick1.port: 49153 >>> Volume1.Brick1.rdma_port: 0 >>> Volume1.Brick1.status: Started >>> Volume1.Brick1.signedin: True >>> Volume1.snap_count: 0 >>> Volume1.stripe_count: 1 >>> Volume1.replica_count: 1 >>> Volume1.subvol_count: 1 >>> Volume1.arbiter_count: 0 >>> Volume1.disperse_count: 0 >>> Volume1.redundancy_count: 0 >>> Volume1.quorum_status: not_applicable >>> Volume1.snapd_svc.online_status: Offline >>> Volume1.snapd_svc.inited: True >>> Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000 >>> Volume1.rebalance.status: not_started >>> Volume1.rebalance.failures: 0 >>> Volume1.rebalance.skipped: 0 >>> Volume1.rebalance.lookedup: 0 >>> Volume1.rebalance.files: 0 >>> Volume1.rebalance.data: 0Bytes >>> >>> Also the "Volume1.options" returns the volume2 details mingled within as >>> below >>> >>> features.barrier: on >>> transport.address-family: inet >>> performance.readdir-ahead: on >>> nfs.disable: on >>> >>> Volume2.name: test-vol1 >>> Volume2.id: 35854708-bb72-45a5-bdbd-77c51e5ebfb9 >>> Volume2.type: Distribute >>> Volume2.transport_type: tcp >>> Volume2.status: Started >>> Volume2.brickcount: 1 >>> Volume2.Brick1.path: 172.17.0.2:/tmp/b2 >>> Volume2.Brick1.hostname: 172.17.0.2 >>> Volume2.Brick1.port: 49152 >>> Volume2.Brick1.rdma_port: 0 >>> Volume2.Brick1.status: Started >>> Volume2.Brick1.signedin: True >>> Volume2.snap_count: 0 
>>> Volume2.stripe_count: 1 >>> Volume2.replica_count: 1 >>> Volume2.subvol_count: 1 >>> Volume2.arbiter_count: 0 >>> Volume2.disperse_count: 0 >>> Volume2.redundancy_count: 0 >>> Volume2.quorum_status: not_applicable >>> Volume2.snapd_svc.online_status: Offline >>> Volume2.snapd_svc.inited: True >>> Volume2.rebalance.id: 00000000-0000-0000-0000-000000000000 >>> Volume2.rebalance.status: not_started >>> Volume2.rebalance.failures: 0 >>> Volume2.rebalance.skipped: 0 >>> Volume2.rebalance.lookedup: 0 >>> Volume2.rebalance.files: 0 >>> Volume2.rebalance.data: 0Bytes >>> >>> Tried debugging a little the parser for this ini file and it looks like >>> sector/sections are formed based on [] brackets and anything below one >>> section (till next [] found) is treated as one section. >>> >>> Instead, the flatted structure like "Volume1.options.nfs.disable: on" >>> would have been an easier option to parse and code change tendrl side. >>> >>> >> Hi, A patch for this is ready here: http://review.gluster.org/15975. >> Thanks. >> >> ~ Samikshan >> >> >> >> At the moment I dont find a way to resolve this mingled sections and >>> handling within tendrl parser. I have tried some tweaking in parser but >>> looks like sections are formed underneath using the library for ini >>> parser. >>> >>> Comments?? 
>>> >>> Regards, >>> Shubhendu >>> >>> >>> On 11/22/2016 08:16 PM, Rohan Kanade wrote: >>> >>>> Sample: >>>> START>>> >>>> >>>> [Global] >>>> MYUUID: 6bbf8ac2-22a0-4f08-b986-fe75aea9f654 >>>> op-version: 40000 >>>> >>>> [Global options] >>>> >>>> [Peers] >>>> >>>> [Volumes] >>>> Volume1.name: test-vol >>>> Volume1.id: 7942d008-e300-4fd9-8af0-5a118afd8d3d >>>> Volume1.type: Distribute >>>> Volume1.transport_type: tcp >>>> Volume1.status: Started >>>> Volume1.brickcount: 1 >>>> Volume1.Brick1.path: 172.17.0.2:/tmp/b1 >>>> Volume1.Brick1.hostname: 172.17.0.2 >>>> Volume1.Brick1.port: 49153 >>>> Volume1.Brick1.rdma_port: 0 >>>> Volume1.Brick1.status: Started >>>> Volume1.Brick1.signedin: True >>>> Volume1.snap_count: 0 >>>> Volume1.stripe_count: 1 >>>> Volume1.replica_count: 1 >>>> Volume1.subvol_count: 1 >>>> Volume1.arbiter_count: 0 >>>> Volume1.disperse_count: 0 >>>> Volume1.redundancy_count: 0 >>>> Volume1.quorum_status: not_applicable >>>> Volume1.snapd_svc.online_status: Offline >>>> Volume1.snapd_svc.inited: True >>>> Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000 >>>> Volume1.rebalance.status: not_started >>>> Volume1.rebalance.failures: 0 >>>> Volume1.rebalance.skipped: 0 >>>> Volume1.rebalance.lookedup: 0 >>>> Volume1.rebalance.files: 0 >>>> Volume1.rebalance.data: 0Bytes >>>> [Volume1.options] >>>> features.barrier: on >>>> transport.address-family: inet >>>> performance.readdir-ahead: on >>>> nfs.disable: on >>>> >>>> Volume2.name: test-vol1 >>>> Volume2.id: 35854708-bb72-45a5-bdbd-77c51e5ebfb9 >>>> Volume2.type: Distribute >>>> Volume2.transport_type: tcp >>>> Volume2.status: Started >>>> Volume2.brickcount: 1 >>>> Volume2.Brick1.path: 172.17.0.2:/tmp/b2 >>>> Volume2.Brick1.hostname: 172.17.0.2 >>>> Volume2.Brick1.port: 49152 >>>> Volume2.Brick1.rdma_port: 0 >>>> Volume2.Brick1.status: Started >>>> Volume2.Brick1.signedin: True >>>> Volume2.snap_count: 0 >>>> Volume2.stripe_count: 1 >>>> Volume2.replica_count: 1 >>>> Volume2.subvol_count: 1 
>>>> Volume2.arbiter_count: 0 >>>> Volume2.disperse_count: 0 >>>> Volume2.redundancy_count: 0 >>>> Volume2.quorum_status: not_applicable >>>> Volume2.snapd_svc.online_status: Offline >>>> Volume2.snapd_svc.inited: True >>>> Volume2.rebalance.id: 00000000-0000-0000-0000-000000000000 >>>> Volume2.rebalance.status: not_started >>>> Volume2.rebalance.failures: 0 >>>> Volume2.rebalance.skipped: 0 >>>> Volume2.rebalance.lookedup: 0 >>>> Volume2.rebalance.files: 0 >>>> Volume2.rebalance.data: 0Bytes >>>> [Volume2.options] >>>> transport.address-family: inet >>>> performance.readdir-ahead: on >>>> nfs.disable: on >>>> >>>> >>>> [Services] >>>> svc1.name: glustershd >>>> svc1.online_status: Offline >>>> >>>> svc2.name: nfs >>>> svc2.online_status: Offline >>>> >>>> svc3.name: bitd >>>> svc3.online_status: Offline >>>> >>>> svc4.name: scrub >>>> svc4.online_status: Offline >>>> >>>> svc5.name: quotad >>>> svc5.online_status: Offline >>>> >>>> >>>> [Misc] >>>> Base port: 49152 >>>> Last allocated port: 49153 >>>> >>>> <>>> >>>> On Tue, Nov 22, 2016 at 2:06 PM, Rohan Kanade >>>> wrote: >>>> >>>>> Also, please provide a full state dump example with this patch included, >>>>> >>>> easier for Tendrl devs to get started without deploying this patch >>>> >>>>> On Tue, Nov 22, 2016 at 1:44 PM, Rohan Kanade >>>>> wrote: >>>>> >>>>>> Id prefer the first option >>>>>> >>>>>> >>>>>> [Volumes] >>>>>> Volume1.name: tv1 >>>>>> Volume1.id: 0242f875-24ad-480d-a605-06de2e0f3842 >>>>>> Volume1.type: Distribute >>>>>> >>>>>> Volume1.rebalance.files: 0 >>>>>> Volume1.rebalance.data: 0Bytes >>>>>> [Volume1.options] >>>>>> nfs.disable: on >>>>>> performance.readdir-ahead: on >>>>>> transport.address-family: inet >>>>>> features.uss: on >>>>>> >>>>>> Volume2.name: tv2 >>>>>> Volume2.id: 937ad30c-bc08-4928-85e4-ece49235037a >>>>>> Volume2.type: Distribute >>>>>> ......... >>>>>> ......... 
>>>>>> >>>>>> >>>>>> This would require minor changes to tendrl/gluster-integration >>>>>> >>>>> definition files and code. I will draw up a spec on >>>> Tendrl/specifications >>>> to document the changes required. Please go ahead with your patch >>>> >>>>> Thanks >>>>>> >>>>>> On Tue, Nov 22, 2016 at 9:18 AM, Atin Mukherjee >>>>>> >>>>> wrote: >>>> >>>>> We are awaiting final confirmation from Rohan. Samikshan has kept the >>>>>>> changes ready and will push it to gerrit once we hear from Rohan. >>>>>>> >>>>>>> On Mon, Nov 21, 2016 at 12:54 PM, Shubhendu Tripathi < >>>>>>> >>>>>> shtripat at redhat.com> >>>> >>>>> wrote: >>>>>>> >>>>>>> Looking at options, I feel option-2 would be more feasible and might >>>>>>>> >>>>>>> not >>>> >>>>> need code changes in tendrl. >>>>>>>> But still lets wait for the confirmation from Rohan. >>>>>>>> >>>>>>>> Regards, >>>>>>>> Shubhendu >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On 11/21/2016 12:43 PM, Atin Mukherjee wrote: >>>>>>>> >>>>>>>> +tendrl-devel >>>>>>>>> >>>>>>>>> On Mon, Nov 21, 2016 at 12:41 PM, Samikshan Bairagya < >>>>>>>>> >>>>>>>> sbairagy at redhat.com >>>> >>>>> wrote: >>>>>>>>> >>>>>>>>> Hey Rohan, >>>>>>>>> >>>>>>>>>> So the current get-state CLI misses volume specific options in its >>>>>>>>>> output. >>>>>>>>>> Somehow I missed it while coming up with the implementation. This >>>>>>>>>> >>>>>>>>> patch >>>> >>>>> by >>>>>>>>>> Atin is a fix for that: http://review.gluster.org/#/c/15889/1. 
The >>>>>>>>>> following example shows how this patch would add these new data >>>>>>>>>> >>>>>>>>> points >>>> >>>>> and >>>>>>>>>> how that would change the existing format: >>>>>>>>>> >>>>>>>>>> [Volumes] >>>>>>>>>> Volume1.name: tv1 >>>>>>>>>> Volume1.id: 0242f875-24ad-480d-a605-06de2e0f3842 >>>>>>>>>> Volume1.type: Distribute >>>>>>>>>> >>>>>>>>>> Volume1.rebalance.files: 0 >>>>>>>>>> Volume1.rebalance.data: 0Bytes >>>>>>>>>> [Volume1.options] >>>>>>>>>> nfs.disable: on >>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>> transport.address-family: inet >>>>>>>>>> features.uss: on >>>>>>>>>> >>>>>>>>>> Volume2.name: tv2 >>>>>>>>>> Volume2.id: 937ad30c-bc08-4928-85e4-ece49235037a >>>>>>>>>> Volume2.type: Distribute >>>>>>>>>> ......... >>>>>>>>>> ......... >>>>>>>>>> >>>>>>>>>> So essentially there would be a new section for every volume that >>>>>>>>>> >>>>>>>>> would >>>> >>>>> list the option names and corresponding values. Would adding this >>>>>>>>>> >>>>>>>>> change >>>> >>>>> still keep the get-state output parseable from Tendrl POV? >>>>>>>>>> >>>>>>>>>> Or would an output like the following make more sense? Let us know. >>>>>>>>>> Thanks. >>>>>>>>>> >>>>>>>>>> [Volumes] >>>>>>>>>> Volume1.name: tv1 >>>>>>>>>> Volume1.id: 0242f875-24ad-480d-a605-06de2e0f3842 >>>>>>>>>> Volume1.type: Distribute >>>>>>>>>> >>>>>>>>>> Volume1.rebalance.files: 0 >>>>>>>>>> Volume1.rebalance.data: 0Bytes >>>>>>>>>> Volume1.options.nfs.disable: on >>>>>>>>>> Volume1.options.performance.readdir-ahead: on >>>>>>>>>> Volume1.options.transport.address-family: inet >>>>>>>>>> Volume1.options.features.uss: on >>>>>>>>>> >>>>>>>>>> Volume2.name: tv2 >>>>>>>>>> Volume2.id: 937ad30c-bc08-4928-85e4-ece49235037a >>>>>>>>>> Volume2.type: Distribute >>>>>>>>>> ......... >>>>>>>>>> ......... 
>>>>>
>>>>> ~ Samikshan
>>>
>>> _______________________________________________
>>> Tendrl-devel mailing list
>>> Tendrl-devel at redhat.com
>>> https://www.redhat.com/mailman/listinfo/tendrl-devel
>>
>> --
>> ~ Atin (atinm)

--
- Atin (atinm)

From anbabu at redhat.com Mon Dec 5 05:07:39 2016 From: anbabu at redhat.com (Anmol Babu) Date: Mon, 5 Dec 2016 00:07:39 -0500 (EST) Subject: [Tendrl-devel] Patches for review In-Reply-To: <1464749384.1189992.1480913848829.JavaMail.zimbra@redhat.com> Message-ID: <1725490733.1190852.1480914459505.JavaMail.zimbra@redhat.com> Hi, Following are the patches seeking your kind attention: Specifications: 1. https://github.com/Tendrl/specifications/pull/8 -- Spec for flow framework refactoring (Rohan, need your help in raising 3 new spec PRs as per Mrugesh's comments) 2. https://github.com/Tendrl/specifications/pull/18 -- Spec for pluggability of different supported alert notifying means Node-agent: 1. https://github.com/Tendrl/node_agent/pull/67 -- Atom to check service status + alerts socket Performance-monitoring: 1. https://github.com/Tendrl/performance_monitoring/pull/2 -- Add time-series DB plugins + APIs to access time-series Common: 1. https://github.com/Tendrl/common/pull/64 -- Add singleton utility Documentation: 1.
https://github.com/Tendrl/documentation/pull/51 -- Add monitoring architecture doc Thanks, Anmol From anbabu at redhat.com Mon Dec 5 05:08:26 2016 From: anbabu at redhat.com (Anmol Babu) Date: Mon, 5 Dec 2016 00:08:26 -0500 (EST) Subject: [Tendrl-devel] Patches for review In-Reply-To: <1725490733.1190852.1480914459505.JavaMail.zimbra@redhat.com> References: <1725490733.1190852.1480914459505.JavaMail.zimbra@redhat.com> Message-ID: <664709141.1190858.1480914506134.JavaMail.zimbra@redhat.com> Kindly review and provide your valuable feedback. Regards, Anmol ----- Original Message ----- From: "Anmol Babu" To: "Mailing list for the contributors to the Tendrl project" Sent: Monday, December 5, 2016 10:37:39 AM Subject: [Tendrl-devel] Patches for review Hi, Following are the patches seeking your kind attention: Specifications: 1. https://github.com/Tendrl/specifications/pull/8 -- Spec for flow framework refactoring (Rohan, need your help in raising 3 new spec PRs as per Mrugesh's comments) 2. https://github.com/Tendrl/specifications/pull/18 -- Spec for pluggability of different supported alert notifying means Node-agent: 1. https://github.com/Tendrl/node_agent/pull/67 -- Atom to check service status + alerts socket Performance-monitoring: 1. https://github.com/Tendrl/performance_monitoring/pull/2 -- Add time-series DB plugins + APIs to access time-series Common: 1. https://github.com/Tendrl/common/pull/64 -- Add singleton utility Documentation: 1.
https://github.com/Tendrl/documentation/pull/51 -- Add monitoring architecture doc Thanks, Anmol _______________________________________________ Tendrl-devel mailing list Tendrl-devel at redhat.com https://www.redhat.com/mailman/listinfo/tendrl-devel From mbukatov at redhat.com Mon Dec 5 10:55:36 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Mon, 5 Dec 2016 11:55:36 +0100 Subject: [Tendrl-devel] labeling github issues In-Reply-To: References: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> Message-ID: <70f5889c-97b1-2c4b-37e4-d33d7b8017d2@redhat.com> On 12/01/2016 11:38 AM, Martin Bukatovic wrote: > On 12/01/2016 11:16 AM, Sankarshan Mukhopadhyay wrote: >> On Thu, Dec 1, 2016 at 3:32 PM, Martin Bukatovic wrote: >>> I would like to assign labels (such as "bug" or "question") to github >>> issues I have created, but I don't seem to have the access rights >>> needed. Could you reconfigure the Tendrl github group so that qe team >>> members can add labels to their github issues? >> >> Alright. I'm missing something here. The specific labels (names which >> you indicate) exist. Can you provide me with a link to a particular >> issue? It should be easier for me to figure out what to do. > > The problem I have here is that while the labels exist, and other > team members are using them on some github issues, I'm unable to do > so. > > When I click on "New issue" in any Tendrl project on github, I don't > see the knobs for setting the label at all [1] - the right panel > which provides those options is missing. Nor do I see them when I > try to edit an already created issue. Since I'm able to label issues of > my own projects, I suspect that this is related to the access rights > of the Tendrl github group. > > To try this yourself, click on the "New issue" button of the tendrl > documentation project[2] and compare it with my screenshot[1].
> If you are able to see knobs to set labels in the right panel, while > I'm not provided with this option as shown on the screenshot, we > would need to reconfigure access rights so that the qe team members > can add labels to tendrl github issues. > > Thank you for your help. > > [1] https://ibin.co/33riFN0YCthe.png > [2] https://github.com/Tendrl/documentation/issues/new ping -- Martin Bukatovic USM QE team From khartsoe at redhat.com Mon Dec 5 13:08:33 2016 From: khartsoe at redhat.com (Kenneth Hartsoe) Date: Mon, 5 Dec 2016 08:08:33 -0500 (EST) Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> Message-ID: <1995417668.75000629.1480943313818.JavaMail.zimbra@redhat.com> Hi Jeff, Thanks for clarifying: I have canceled Wednesday's scheduled discussion meeting and will revisit in the Tendrl 4 time frame, thanks. Ken Hartsoe Content Strategist Red Hat Storage Documentation khartsoe at redhat.com; IRC: khartsoe Office: 919 754 4770; Internal: 814 4770 ----- Original Message ----- | I think this is a good idea but I don't think it rises to the top of the | priority list given all the things we are trying to do. I suggest we | discuss this in the Tendrl 4 time frame and focus now on actual feature | development and testing. | | On Fri, Dec 2, 2016 at 10:15 AM Martin Bukatovic | wrote: | | > On 12/02/2016 12:23 PM, John Spray wrote: | > > This isn't really a question for "ceph doc team", it's a question for | > > the Ceph community as a whole -- we don't have separate people writing | > > the docs, or separate infrastructure for hosting them. Docs are in | > > ceph.git and builds are on the community ceph.com site. 
You will have | > > noticed that currently docs are at http://docs.ceph.com/docs/jewel/, | > > http://docs.ceph.com/docs/master/ etc -- obviously you won't get | > > stable links to master so you would have to pick a stable branch to | > > link to, and then re-check all your links every year or so at the | > > point that you update to link to a more recent stable branch. Anyone | > > who wants to go forward with this should take the question to | > > ceph-devel: "Can I rely on URLs on docs.ceph.com?" | > | > Understood. | > | > > Because Tendrl doesn't cover most of Ceph administration, and we're | > > still expecting users to use the command line for everything that | > > Tendrl doesn't do, I think there's still a basic expectation that | > > users will have done some level of reading/familiarisation -- at that | > > stage I'm not sure how critical it is for the UI to point people at | > > the documentation. | > | > That's definitely true, an admin using Tendrl is expected to have | > some level of knowledge of Ceph or Gluster storage. | > | > Maybe we could ask a different question: what would an experienced Ceph | > administrator find useful to be referenced in the Tendrl UI? Linking to | > the ceph docs directly for a particular action (so that the connection | > between a UI feature and the ceph action is clear, making it easy to look up | > ceph-related details) - or just linking to the Tendrl docs which | > would reveal more details on how the features and values reported by | > the UI relate to the ceph features without actually linking to the | > ceph docs?
| > | > -- | > Martin Bukatovic | > USM QE team | > | > _______________________________________________ | > Tendrl-devel mailing list | > Tendrl-devel at redhat.com | > https://www.redhat.com/mailman/listinfo/tendrl-devel | > | _______________________________________________ | Tendrl-devel mailing list | Tendrl-devel at redhat.com | https://www.redhat.com/mailman/listinfo/tendrl-devel | From mbukatov at redhat.com Tue Dec 6 16:11:28 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Tue, 6 Dec 2016 17:11:28 +0100 Subject: [Tendrl-devel] tendrl packaging Message-ID: <6091ca44-45db-5ba8-691b-b8e3605374d6@redhat.com> Dear tendrl-devel list, I'm working on a review of the usmqe-setup code [1] while creating issues for Python and Fedora packaging issues at the same time, sometimes with a pull request if the change is straightforward (which means when I don't need any internal implementation knowledge to get the issue fixed). Related JIRA task is here: https://tendrl.atlassian.net/browse/AR-8 That said, I will use github issues (and pull requests when suitable) for descriptions of the issues themselves. The JIRA task is here just for management purposes. The sooner we do this, the better. I will work on this until the pull request is fixed (so that it could be merged) and all remaining packaging issues are fixed. [1] https://github.com/Tendrl/usmqe-setup/pull/6#pullrequestreview-11348551 -- Martin Bukatovic USM QE team From mbukatov at redhat.com Tue Dec 6 16:27:08 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Tue, 6 Dec 2016 17:27:08 +0100 Subject: [Tendrl-devel] Few questions/clarification regarding tendrl In-Reply-To: References: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> Message-ID: On 12/01/2016 10:07 PM, Jeff Applewhite wrote: > The Ceph team and Nishanth and a couple of others actually met yesterday to > discuss this.
The plan is for development to file bugs on the needed > features to support our needs by Friday and let the Ceph team assess their > ability to deliver these. Then we will meet again next week to review. But > you rightly point out there is a disconnect that needs to be addressed. Has this topic been discussed this week in Bangalore? -- Martin Bukatovic USM QE team From vsarmila at redhat.com Wed Dec 7 05:06:26 2016 From: vsarmila at redhat.com (Sharmilla Abhilash) Date: Wed, 7 Dec 2016 10:36:26 +0530 Subject: [Tendrl-devel] Few questions/clarification regarding tendrl In-Reply-To: References: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> Message-ID: Yes, Nishanth had filed the BZ's related to calamari today. I'm following up on this. On Tue, Dec 6, 2016 at 9:57 PM, Martin Bukatovic wrote: > On 12/01/2016 10:07 PM, Jeff Applewhite wrote: > > The Ceph team and Nishanth and a couple of others actually met yesterday > to > > discuss this. The plan is for development to file bugs on the needed > > features to support our needs by Friday and let the Ceph team assess > their > > ability to deliver these. Then we will meet again next week to review. > But > > you rightly point out there is a disconnect that needs to be addressed. > > Has this topic been discussed this week in Bangalore? 
> > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > -- Sharmilla Abhilash PgM, Storage From mbukatov at redhat.com Wed Dec 7 07:35:18 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Wed, 7 Dec 2016 08:35:18 +0100 Subject: [Tendrl-devel] Few questions/clarification regarding tendrl In-Reply-To: References: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> Message-ID: <8af77f60-476f-04d5-0d16-204a3fc9c55b@redhat.com> On 12/07/2016 06:06 AM, Sharmilla Abhilash wrote: > Yes, Nishanth had filed the BZ's related to calamari today. I'm following > up on this. That's great news. Could you also provide a BZ number for reference? The current state, where we basically fork the calamari code into the ceph bridge component, is unusual and needs to be resolved in the long term. -- Martin Bukatovic USM QE team From rghatvis at redhat.com Wed Dec 7 11:32:26 2016 From: rghatvis at redhat.com (Bobb Gt) Date: Wed, 7 Dec 2016 17:02:26 +0530 Subject: [Tendrl-devel] linking to storage docs from Tendrl user interface In-Reply-To: <1995417668.75000629.1480943313818.JavaMail.zimbra@redhat.com> References: <210dc27c-a592-9b12-abcf-af416665ed97@redhat.com> <1ecb8da4-593f-50b3-6b58-874505c1bedf@redhat.com> <1995417668.75000629.1480943313818.JavaMail.zimbra@redhat.com> Message-ID: I know that this discussion is deferred for Tendrl 4 but I would like to share CloudForms UI screens where they have added a shortcut to product documentation. The CloudForms UI design is based on Patternfly just like the Storage Console. These screens could be used for reference/feasibility purposes once the discussion on this is resumed. Thanks, Bobb GT Technical Writer, Red Hat Inc Mobile: +91 8411001236 Website: redhat.com Division: Customer Content Services APAC
On Mon, Dec 5, 2016 at 6:38 PM, Kenneth Hartsoe wrote: > Hi Jeff, > > Thanks for clarifying: I have canceled Wednesday's scheduled discussion > meeting and will revisit in the Tendrl 4 time frame, thanks. > > Ken Hartsoe > Content Strategist > Red Hat Storage Documentation > > khartsoe at redhat.com; IRC: khartsoe > Office: 919 754 4770; Internal: 814 4770 > > ----- Original Message ----- > | I think this is a good idea but I don't think it rises to the top of the > | priority list given all the things we are trying to do. I suggest we > | discuss this in the Tendrl 4 time frame and focus now on actual feature > | development and testing. > | > | On Fri, Dec 2, 2016 at 10:15 AM Martin Bukatovic > | wrote: > | > | > On 12/02/2016 12:23 PM, John Spray wrote: > | > > This isn't really a question for "ceph doc team", it's a question for > | > > the Ceph community as a whole -- we don't have separate people > writing > | > > the docs, or separate infrastructure for hosting them. Docs are in > | > > ceph.git and builds are on the community ceph.com site. You will > have > | > > noticed that currently docs are at http://docs.ceph.com/docs/jewel/, > | > > http://docs.ceph.com/docs/master/ etc -- obviously you won't get > | > > stable links to master so you would have to pick a stable branch to > | > > link to, and then re-check all your links every year or so at the > | > > point that you update to link to a more recent stable branch. Anyone > | > > who wants to go forward with this should take the question to > | > > ceph-devel: "Can I rely on URLs on docs.ceph.com?" > | > > | > Understood. 
> | > > | > > Because Tendrl doesn't cover most of Ceph administration, and we're > | > > still expecting users to use the command line for everything that > | > > Tendrl doesn't do, I think there's still a basic expectation that > | > > users will have done some level of reading/familiarisation -- at that > | > > stage I'm not sure how critical it is for the UI to point people at > | > > the documentation. > | > > | > That's definitely true, an admin using Tendrl is expected to have > | > some level of knowledge of Ceph or Gluster storage. > | > > | > Maybe we could ask a different question: what would an experienced Ceph > | > administrator find useful to be referenced in the Tendrl UI? Linking > to > | > the ceph docs directly for a particular action (so that the connection > | > between a UI feature and the ceph action is clear, making it easy to look > up > | > ceph-related details) - or just linking to the Tendrl docs which > | > would reveal more details on how the features and values reported by > | > the UI relate to the ceph features without actually linking to the > | > ceph docs?
> | > > | > -- > | > Martin Bukatovic > | > USM QE team > | > > | > _______________________________________________ > | > Tendrl-devel mailing list > | > Tendrl-devel at redhat.com > | > https://www.redhat.com/mailman/listinfo/tendrl-devel > | > > | _______________________________________________ > | Tendrl-devel mailing list > | Tendrl-devel at redhat.com > | https://www.redhat.com/mailman/listinfo/tendrl-devel > | > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > From vsarmila at redhat.com Thu Dec 8 05:45:46 2016 From: vsarmila at redhat.com (Sharmilla Abhilash) Date: Thu, 8 Dec 2016 11:15:46 +0530 Subject: [Tendrl-devel] Few questions/clarification regarding tendrl In-Reply-To: <8af77f60-476f-04d5-0d16-204a3fc9c55b@redhat.com> References: <90597ef5-e2ad-2187-e849-d826feb41012@redhat.com> <8af77f60-476f-04d5-0d16-204a3fc9c55b@redhat.com> Message-ID: here is the list of BZ's https://bugzilla.redhat.com/show_bug.cgi?id=1401903 https://bugzilla.redhat.com/show_bug.cgi?id=1401906 https://bugzilla.redhat.com/show_bug.cgi?id=1401910 https://bugzilla.redhat.com/show_bug.cgi?id=1401926 https://bugzilla.redhat.com/show_bug.cgi?id=1401936 On Wed, Dec 7, 2016 at 1:05 PM, Martin Bukatovic wrote: > On 12/07/2016 06:06 AM, Sharmilla Abhilash wrote: > > Yes, Nishanth had filed the BZ's related to calamari today. I'm following > > up on this. > > That's great news. Could you also provide a BZ number for reference? > > The current state when we basically fork calamari code into ceph bridge > component is not usual and needs to be resolved in long term. 
> > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > -- Sharmilla Abhilash PgM, Storage From nthomas at redhat.com Thu Dec 8 06:05:17 2016 From: nthomas at redhat.com (Nishanth Thomas) Date: Thu, 8 Dec 2016 11:35:17 +0530 Subject: [Tendrl-devel] tendrl packaging In-Reply-To: <6091ca44-45db-5ba8-691b-b8e3605374d6@redhat.com> References: <6091ca44-45db-5ba8-691b-b8e3605374d6@redhat.com> Message-ID: On Tue, Dec 6, 2016 at 9:41 PM, Martin Bukatovic wrote: > Dear tendrl-devel list, > > I'm working on review of usmqe-setup code [1] while creating issues > for python and fedora packaging issues at the same time, sometimes > with a pull request if the change is straightforward (which means > when I don't need any internal implementation knowledge to get > the issue fixed). > > Related JIRA task is here: https://tendrl.atlassian.net/browse/AR-8 I had a look at the Jira issue and found https://github.com/Tendrl/node_agent/issues/80 . This is not a blocker as long as you are using rpms. Also this issue is already fixed and merged upstream > > > That said, I will use github issues (and pull requests when suitable) > for description of the issues itself. The JIRA task is here just for > management purposes. > > Sooner we do this, the better. > > I will work on this until the request is fixed (so that it could be > merged) and all remaining packaging issues are fixed. 
> > [1] https://github.com/Tendrl/usmqe-setup/pull/6# > pullrequestreview-11348551 > > -- > Martin Bukatovic > USM QE team > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > From mbukatov at redhat.com Thu Dec 8 07:13:57 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Thu, 8 Dec 2016 08:13:57 +0100 Subject: [Tendrl-devel] tendrl packaging In-Reply-To: References: <6091ca44-45db-5ba8-691b-b8e3605374d6@redhat.com> Message-ID: On 12/08/2016 07:05 AM, Nishanth Thomas wrote: > I had a look at the Jira issue and found > https://github.com/Tendrl/node_agent/issues/80 . This is not a blocker as > long as you are using rpms. Also this issue is already fixed and merged > upstream This is not a full interpretation of my intent. My plan is to review packaging issues as I'm stabilizing our setup ansible automation. Issues I find during this process may end up being fixed in documentation, qe setup, tendrl code, python setuptools based packaging of tendrl or rpm packaging of tendrl. For example: * https://github.com/Tendrl/common/issues/72 * https://github.com/Tendrl/common/issues/74 Expect more issues like this to be created in near future. Without doing this, I can't be sure that our automation installs the product as expected. -- Martin Bukatovic USM QE team From sankarshan.mukhopadhyay at gmail.com Thu Dec 8 07:19:35 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Thu, 8 Dec 2016 12:49:35 +0530 Subject: [Tendrl-devel] tendrl packaging In-Reply-To: References: <6091ca44-45db-5ba8-691b-b8e3605374d6@redhat.com> Message-ID: On Thu, Dec 8, 2016 at 12:43 PM, Martin Bukatovic wrote: > On 12/08/2016 07:05 AM, Nishanth Thomas wrote: >> I had a look at the Jira issue and found >> https://github.com/Tendrl/node_agent/issues/80 . This is not a blocker as >> long as you are using rpms. 
Also this issue is already fixed and merged >> upstream > > This is not a full interpretation of my intent. My plan is to review > packaging issues as I'm stabilizing our setup ansible automation. Issues > I find during this process may end up being fixed in documentation, qe > setup, tendrl code, python setuptools based packaging of tendrl or rpm > packaging of tendrl. > > For example: > > * https://github.com/Tendrl/common/issues/72 > * https://github.com/Tendrl/common/issues/74 > > Expect more issues like this to be created in near future. > > Without doing this, I can't be sure that our automation installs the > product as expected. > I understand and I think this is a great initiative. Thanks for doing the ground work before we switch on the automation (and automated tests). For the issues which you are filing, please do add Mrugesh, Rohan, Nishanth so as to ensure that we can provide feedback in an iterative and rapid manner. -- sankarshan mukhopadhyay From rkanade at redhat.com Fri Dec 9 09:48:25 2016 From: rkanade at redhat.com (Rohan Kanade) Date: Fri, 9 Dec 2016 15:18:25 +0530 Subject: [Tendrl-devel] [Tech] On Async IO and Gevent usage in Tendrl Message-ID: Tendrl components use Async IO via gevent in python, here's a good read on those topics https://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/ From fbalak at redhat.com Mon Dec 12 06:21:25 2016 From: fbalak at redhat.com (Filip Balak) Date: Mon, 12 Dec 2016 01:21:25 -0500 (EST) Subject: [Tendrl-devel] Sick day today (maybe tommorow too) In-Reply-To: <648180532.2437853.1481523592226.JavaMail.zimbra@redhat.com> Message-ID: <1633140667.2437882.1481523685024.JavaMail.zimbra@redhat.com> Death in the family. 
From mkudlej at redhat.com Mon Dec 12 13:23:39 2016 From: mkudlej at redhat.com (Martin Kudlej) Date: Mon, 12 Dec 2016 14:23:39 +0100 Subject: [Tendrl-devel] status reporting tool Message-ID: <9d23f599-8b7d-8351-59fa-97c7733e143c@redhat.com> Hi all, this tool can help you to report your day-to-day status http://did.readthedocs.io/en/latest/overview/ -- Best Regards, Martin Kudlej. RHSC/USM Senior Quality Assurance Engineer Red Hat Czech s.r.o. Phone: +420 532 294 155 E-mail:mkudlej at redhat.com IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, #usm-meeting @ redhat #tendrl-devel @ freenode From japplewh at redhat.com Mon Dec 12 14:07:31 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Mon, 12 Dec 2016 19:37:31 +0530 Subject: [Tendrl-devel] API documentation Message-ID: Hi All I discussed the API docs issue with Mrugesh today. There is an outstanding PR that QE needs to ack here: https://github.com/Tendrl/documentation/pull/62 Please comment on list if this is sufficient for automated test creation. Thanks, -- Jeff Applewhite Principal Product Manager From sankarshan.mukhopadhyay at gmail.com Mon Dec 12 14:11:33 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Mon, 12 Dec 2016 19:41:33 +0530 Subject: [Tendrl-devel] API documentation In-Reply-To: References: Message-ID: On Mon, Dec 12, 2016 at 7:37 PM, Jeff Applewhite wrote: > I discussed the API docs issue with Mrugesh today. There is an > outstanding PR that QE needs to ack here: > > https://github.com/Tendrl/documentation/pull/62 > > Please comment on list if this is sufficient for automated test creation. Can I request that we add a member of the team to be added so as to bubble this up in attention? 
-- sankarshan mukhopadhyay From mrugesh at brainfunked.org Mon Dec 12 14:27:57 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Mon, 12 Dec 2016 19:57:57 +0530 Subject: [Tendrl-devel] API documentation In-Reply-To: References: Message-ID: On 12 December 2016 at 19:41, Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote: > > On Mon, Dec 12, 2016 at 7:37 PM, Jeff Applewhite wrote: > > I discussed the API docs issue with Mrugesh today. There is an > > outstanding PR that QE needs to ack here: > > > > https://github.com/Tendrl/documentation/pull/62 > > > > Please comment on list if this is sufficient for automated test creation. > > Can I request that we add a member of the team to be added so as to > bubble this up in attention? The reviewers functionality on github is being a bit buggy. Only one of the reviewers actually shows up. So, we have tagged people in the comments. -- Mrugesh From anbabu at redhat.com Mon Dec 12 14:32:28 2016 From: anbabu at redhat.com (Anmol Babu) Date: Mon, 12 Dec 2016 09:32:28 -0500 (EST) Subject: [Tendrl-devel] Patches for review In-Reply-To: <664709141.1190858.1480914506134.JavaMail.zimbra@redhat.com> References: <1725490733.1190852.1480914459505.JavaMail.zimbra@redhat.com> <664709141.1190858.1480914506134.JavaMail.zimbra@redhat.com> Message-ID: <1313023353.2484223.1481553148246.JavaMail.zimbra@redhat.com> Thanks Shubhendu, Nishanth and Rohan for the reviews. I am working on your comments and I'll update the patches soon. Regards, Anmol ----- Original Message ----- From: "Anmol Babu" To: "Mailing list for the contributors to the Tendrl project" Sent: Monday, December 5, 2016 10:38:26 AM Subject: Re: [Tendrl-devel] Patches for review Kindly review and provide your valuable feedback. 
Regards, Anmol ----- Original Message ----- From: "Anmol Babu" To: "Mailing list for the contributors to the Tendrl project" Sent: Monday, December 5, 2016 10:37:39 AM Subject: [Tendrl-devel] Patches for review Hi, Following are the patches seeking your kind attention: Specifications: 1. https://github.com/Tendrl/specifications/pull/8 -- Spec for flow framework refactoring (Rohan, need your help in raising 3 new spec PRs as per Mrugesh's comments) 2. https://github.com/Tendrl/specifications/pull/18 -- Spec for pluggability of different supported alert notifying means Node-agent: 1. https://github.com/Tendrl/node_agent/pull/67 -- Atom to check service status + alerts socket Performance-monitoring: 1. https://github.com/Tendrl/performance_monitoring/pull/2 -- Add time-series DB plugins + APIs to access time-series Common: 1. https://github.com/Tendrl/common/pull/64 -- Add singleton utility Documentation: 1. https://github.com/Tendrl/documentation/pull/51 -- Add monitoring architecture doc Thanks, Anmol _______________________________________________ Tendrl-devel mailing list Tendrl-devel at redhat.com https://www.redhat.com/mailman/listinfo/tendrl-devel _______________________________________________ Tendrl-devel mailing list Tendrl-devel at redhat.com https://www.redhat.com/mailman/listinfo/tendrl-devel From mrugesh at brainfunked.org Mon Dec 12 15:08:15 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Mon, 12 Dec 2016 20:38:15 +0530 Subject: [Tendrl-devel] Specifications repository organisation and check-in meetings Message-ID: The specifications repository on github[1] has had an overhaul in terms of the organisation of the issues. The tagging of the issues[2] should communicate whether they pertain to any specific high level feature and provide a visual cue for a contributor looking for a specific area of the project. The project[3] board `Focus Areas' (for the lack of a better name) in the repository explicitly lists out the priorities of the issues.
The idea of a live check in during office hours is to ensure that the work can be assigned priority and be addressed as quickly as possible. Check-ins are currently scheduled to be at 2:30 PM IST, on the IRC channel[4]. The summary of each day's check-in will be sent to this list. The check-in meetings provide updates regarding the ones under the `In Progress' list. The `Priorities' and `Projected: December '16' lists are where new issues to be worked on get picked up from. Check-in meeting protocol: * Individual contributors will be asked during the meeting to provide their updates. Please await the highlight. * Ensure that the issues you're working on are updated nightly with commits and comments. * Don't wait for an entire pull request to be ready, send small commits at least once a day. This applies to both the specifications themselves and the code for individual component issues linked to the specification. * The issues being mentioned in the check-in should be under the `In Progress' list on the project board[2]. * If it's possible that an in progress issue may be completed sufficiently as to allow working on another, use the priorities on the project board, as described above. Mention this as an update in the check-in meeting. * Keep the update ready before the check-in itself. Use the following template: - Github specification issue title and link from [2] - Issues linked to this specification from component repositories that have been updated. - Summary of the update across the whole specification - Any problems, blockers or help required. Feel free to point out a team member whom you'd like to look at the issue. This includes any code reviews or testing required. - Optionally, the issue to be worked on next, later in the day. 
[1] https://github.com/Tendrl/specifications [2] https://github.com/Tendrl/specifications/issues [3] https://github.com/Tendrl/specifications/projects/2 [4] #tendrl-devel on Freenode From mrugesh at brainfunked.org Mon Dec 12 15:47:30 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Mon, 12 Dec 2016 21:17:30 +0530 Subject: [Tendrl-devel] Daily check-in summary for 20161212 Message-ID: All the issues are under https://github.com/Tendrl/specifications/issues unless otherwise stated. Issues under development: * Framework enhancements for better flow control and better handling of definitions in the central store (#34, #32). * Compatibility update for the changes in the gluster get-state output (#30). * Updates to Tendrl's inventory by importing disk and network details from each of the nodes (#43, #41). * Code refactoring (#31). Specifications being developed: * Import cluster workflow (#54). Although this is working functionality, there are some auto-detection related updates to be made. We're doing the whole import cluster workflow specification from scratch for documentation and testing purposes. * Pluggable destinations for alerts, such as email, SNMP etc. (#40). * Flows for provisioning the monitoring stack onto storage nodes (#42). Other updates: * Gowtham is currently writing a specification for "Centralised, layered configuration" (#29). Once done, this specification will be pushed down in priority and he'll take up another specification at a higher priority. * CentOS CI based automation is a priority. The testing team have been making progress on the same. They'll be making updates to a specification linked to #53. * The UI and testing teams will start adding issues and corresponding specifications for their planned or in progress work. 
--
Mrugesh

From mrugesh at brainfunked.org Tue Dec 13 11:13:47 2016
From: mrugesh at brainfunked.org (Mrugesh Karnik)
Date: Tue, 13 Dec 2016 16:43:47 +0530
Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161213
Message-ID:

NOTE: All the issues are under https://github.com/Tendrl/specifications/issues unless otherwise stated.

# Github workflow related updates

The "Focus Areas" project (https://github.com/Tendrl/specifications/projects/2) in the specifications repository has had a small update. The "In Progress" list has now been split into Specifications and Implementation.

# Errata for yesterday's update:

"Updates to Tendrl's inventory by importing disk and network details from each of the nodes (#43, #41)" should have been under "Specifications being developed".

# Brought forward, updated

All the issues listed below have been updated with comments, reviews or commits.

## Issues under development:

* Framework enhancements for better flow control and better handling of definitions in the central store (#34, #32).
* Compatibility update for the changes in the gluster get-state output (#30).
* Code refactoring (#31).

## Specifications being developed:

* Updates to Tendrl's inventory by importing disk and network details from each of the nodes (#43, #41).
* Pluggable destinations for alerts, such as email, SNMP etc. (#40).
* Flows for provisioning the monitoring stack onto storage nodes (#42).
* CentOS CI based automation (#53) has had some discussion. dahorak will be sending a commit to open the specification pull request.

# Brought forward, not updated

* The testing team hasn't yet added issues and corresponding specifications for their planned or in progress work. mkudlej requires a demo for the jira-github workflow. The github workflow has been documented on the mailing list and mkudlej has been asked to raise questions on the list itself.
* The UI team, kverma and ngupta, hasn't yet committed a specification against Import Cluster UI (#56). This specification needs to be linked to the UX designs, provide API requirements and be reviewed by mkudlej so that tests can be provided to code against.

# Other updates

* ababu will be linking several issues he's been working on to the appropriate specifications. He'll be creating an issue for the specification for the alerting and monitoring API. This will be a collaboration between him and anivargi.
* mbukatovic doesn't have a specification he's currently working against. He's checking the installation and testing related playbooks and filing issues for configuration and packaging. The list of bugs filed is:
  https://github.com/Tendrl/node_agent/issues/97
  https://github.com/Tendrl/node_agent/issues/98
  https://github.com/Tendrl/gluster_integration/issues/86
  https://github.com/Tendrl/node_agent/issues/99
  https://github.com/Tendrl/gluster_integration/issues/87
  https://github.com/Tendrl/ceph_integration/issues/57
  https://github.com/Tendrl/tendrl-api/issues/37
* tasir has picked up the node agent issues #97 and #99 from the above list.
* tasir is working on filing a specification for definition file validation (#57).
* dahorak has not provided an update for yesterday. (Reason provided: "working on unrelated tasks now").

# Blockers

Several issues are being blocked by pending reviews. Most of these need nishanth, rkanade, shubhendu and mkarnik's attention.
https://github.com/Tendrl/node_agent/pull/67
https://github.com/Tendrl/performance_monitoring/pull/2
https://github.com/Tendrl/specifications/pull/9
https://github.com/Tendrl/specifications/pull/10
https://github.com/Tendrl/specifications/issues/34
Everything linked against https://github.com/Tendrl/specifications/issues/31
https://github.com/Tendrl/specifications/pull/7
https://github.com/Tendrl/specifications/pull/6
https://github.com/Tendrl/gluster_integration/issues/74
https://github.com/Tendrl/gluster_integration/issues/73

--
Mrugesh

From mbukatov at redhat.com Tue Dec 13 13:02:13 2016
From: mbukatov at redhat.com (Martin Bukatovic)
Date: Tue, 13 Dec 2016 14:02:13 +0100
Subject: [Tendrl-devel] labeling github issues
In-Reply-To:
References: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com>
Message-ID: <4e089559-6d12-452d-975a-93f65645babb@redhat.com>

On 12/01/2016 11:38 AM, Martin Bukatovic wrote:
> On 12/01/2016 11:16 AM, Sankarshan Mukhopadhyay wrote:
>> On Thu, Dec 1, 2016 at 3:32 PM, Martin Bukatovic wrote:
>>> I would like to assign labels (such as "bug" or "question") to github
>>> issues I have created, but I don't seem to have the access rights
>>> needed. Could you reconfigure the Tendrl github group so that qe team
>>> members can add labels to theirs github issues?
>>
>> Alright. I'm missing something here. The specific label (names, which
>> you indicate) exist. Can you provide me with a link to a particular
>> issue? It should be easier for me to figure out what to do.
>
> The problem I have here is that while the labels exist, and other
> team members are using them on some github issues, I'm unable to do
> so.
>
> When I click on "New issue" of any Tendrl project on github, I don't
> see the knobs for setting the label at all [1] - the right panel
> which provides those options is missing. Nor do I see them when I
> try to edit an already created issue.
> Since I'm able to label issues of
> my own projects, I suspect that this is related to access rights
> of Tendrl github group.
>
> To try this yourself, try to click on "New issue" button of tendrl
> documentation project[2] and compare it with my screenshot[1].
> If you are able to see knobs to set labels in the right panel, while
> I'm not provided with this option as shown on the screenshot, we
> would need to reconfigure access rights so that the qe team members
> can add labels to tendrl github issues.
>
> Thank you for your help.
>
> [1] https://ibin.co/33riFN0YCthe.png
> [2] https://github.com/Tendrl/documentation/issues/new

ping

Is there any problem on the github access control side?

--
Martin Bukatovic
USM QE team

From amukherj at redhat.com Wed Dec 14 13:10:53 2016
From: amukherj at redhat.com (Atin Mukherjee)
Date: Wed, 14 Dec 2016 18:40:53 +0530
Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ?
Message-ID:

Team,

We have identified an issue related to the POSIX_SAME_GFID event, where this unwanted event is seen for the .trashcan and .trashcan/internal_op folders. This event is meant to be emitted from the posix stack in Gluster, in the mkdir codepath, in case an existing directory shares the same GFID as the new mkdir request, which can lead to inconsistencies. This particular case is observed when a brick stop and start is performed. Now the question I have here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. Will there be any reactive action taken against it, or will it just be notified to the admin?

Could you please assess this case w.r.t. how this impacts tendrl and whether we can live with it?

~ Atin (atinm)

From sankarshan at redhat.com Wed Dec 14 13:19:23 2016
From: sankarshan at redhat.com (sankarshan)
Date: Wed, 14 Dec 2016 18:49:23 +0530
Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ?
In-Reply-To: References: Message-ID: On 14 December 2016 at 18:40, Atin Mukherjee wrote: > We have identified an issue related to POSIX_SAME_GFID event where this > unwanted event is seen for .trashcan and .trashcan/internal_op folders. > This event is meant to emitted from posix stack from Gluster in mkdir > codepath in case an existing directory with the new mkdir request shares > the same GFID which can lead to inconsistencies. This particular case is > observed when a brick stop and start is performed. Now the question I've > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. Will > there be any reactive action taken against it or it just gets notified to > the admin? > > Could you please assess this case w.r.t how does this impact tendrl and if > we can live with it? Alright. So, I'd like to propose this approach. What would a Gluster storage admin do (in absence of Tendrl) in order to deal with this notification and the event which caused it? Are there specific sequence of steps which (s)he would perform and thus additional new flows need to be built into Tendrl? Or, is this a (benign?) event which is more of warning/information and no further (remedial) action is required by the admin? From amukherj at redhat.com Wed Dec 14 13:21:55 2016 From: amukherj at redhat.com (Atin Mukherjee) Date: Wed, 14 Dec 2016 18:51:55 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 6:49 PM, sankarshan wrote: > On 14 December 2016 at 18:40, Atin Mukherjee wrote: > > We have identified an issue related to POSIX_SAME_GFID event where this > > unwanted event is seen for .trashcan and .trashcan/internal_op folders. 
> > This event is meant to emitted from posix stack from Gluster in mkdir > > codepath in case an existing directory with the new mkdir request shares > > the same GFID which can lead to inconsistencies. This particular case is > > observed when a brick stop and start is performed. Now the question I've > > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. > Will > > there be any reactive action taken against it or it just gets notified to > > the admin? > > > > Could you please assess this case w.r.t how does this impact tendrl and > if > > we can live with it? > > Alright. So, I'd like to propose this approach. What would a Gluster > storage admin do (in absence of Tendrl) in order to deal with this > notification and the event which caused it? Are there specific > sequence of steps which (s)he would perform and thus additional new > flows need to be built into Tendrl? Or, is this a (benign?) event > which is more of warning/information and no further (remedial) action > is required by the admin? > +Pranith - could you chime in with your thoughts here? > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > -- ~ Atin (atinm) From pkarampu at redhat.com Wed Dec 14 13:27:31 2016 From: pkarampu at redhat.com (Pranith Kumar Karampuri) Date: Wed, 14 Dec 2016 18:57:31 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee wrote: > > > On Wed, Dec 14, 2016 at 6:49 PM, sankarshan wrote: > >> On 14 December 2016 at 18:40, Atin Mukherjee wrote: >> > We have identified an issue related to POSIX_SAME_GFID event where this >> > unwanted event is seen for .trashcan and .trashcan/internal_op folders. 
>> > This event is meant to emitted from posix stack from Gluster in mkdir >> > codepath in case an existing directory with the new mkdir request shares >> > the same GFID which can lead to inconsistencies. This particular case is >> > observed when a brick stop and start is performed. Now the question I've >> > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. >> Will >> > there be any reactive action taken against it or it just gets notified >> to >> > the admin? >> > >> > Could you please assess this case w.r.t how does this impact tendrl and >> if >> > we can live with it? >> >> Alright. So, I'd like to propose this approach. What would a Gluster >> storage admin do (in absence of Tendrl) in order to deal with this >> notification and the event which caused it? Are there specific >> sequence of steps which (s)he would perform and thus additional new >> flows need to be built into Tendrl? Or, is this a (benign?) event >> which is more of warning/information and no further (remedial) action >> is required by the admin? >> > > +Pranith - could you chime in with your thoughts here? > Storage admin should try to fix the directory gfids to make sure two directories won't have same gfid. This log/event is added to help people who want to fix the directory gfids by giving the two directory path names. So if I am a storage admin and I see this issue I will need to immediately call redhat support and give these events/logs which will make life easier for GSS to fix the issue. I am also adding Raghavendra/Nithya. 
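For reference, the payload of such a gf_event is a flat, semicolon-delimited key=value string carrying both directory paths (the exact format string is quoted later in this thread). A minimal sketch of how a consumer such as Tendrl might extract the paths — the function name and the sample values are illustrative, not Tendrl's actual API:

```python
def parse_gf_event(payload):
    """Split a gf_event payload of the form
    'gfid=<uuid>;path=<existing dir>;newpath=<requested dir>;brick=<host>:<base path>'
    into a dict. Splitting on the first '=' only keeps the ':' inside
    the brick value intact; values are assumed not to contain ';'."""
    return dict(part.split("=", 1) for part in payload.split(";"))

# Illustrative payload; the gfid and directory names are made up.
payload = ("gfid=deadbeef-0000-0000-0000-000000000001;"
           "path=/bricks/b1/dir_a;newpath=/bricks/b1/dir_b;"
           "brick=node1.example.com:/bricks/b1")
fields = parse_gf_event(payload)
# The two directory paths an admin (or support) would need:
print(fields["path"], fields["newpath"])  # -> /bricks/b1/dir_a /bricks/b1/dir_b
```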
> > >> >> _______________________________________________ >> Tendrl-devel mailing list >> Tendrl-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/tendrl-devel >> > > > > -- > > ~ Atin (atinm) > -- Pranith From nbalacha at redhat.com Wed Dec 14 13:33:11 2016 From: nbalacha at redhat.com (Nithya Balachandran) Date: Wed, 14 Dec 2016 19:03:11 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: I think this shows up because the internal dirs already exist on the brick not because they have the same gfid. I've seen the posix log message every time a brick is restarted. This is something we should ignore for these dirs. Regards, Nithya On 14 December 2016 at 18:57, Pranith Kumar Karampuri wrote: > > > On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee > wrote: > >> >> >> On Wed, Dec 14, 2016 at 6:49 PM, sankarshan >> wrote: >> >>> On 14 December 2016 at 18:40, Atin Mukherjee >>> wrote: >>> > We have identified an issue related to POSIX_SAME_GFID event where this >>> > unwanted event is seen for .trashcan and .trashcan/internal_op folders. >>> > This event is meant to emitted from posix stack from Gluster in mkdir >>> > codepath in case an existing directory with the new mkdir request >>> shares >>> > the same GFID which can lead to inconsistencies. This particular case >>> is >>> > observed when a brick stop and start is performed. Now the question >>> I've >>> > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. >>> Will >>> > there be any reactive action taken against it or it just gets notified >>> to >>> > the admin? >>> > >>> > Could you please assess this case w.r.t how does this impact tendrl >>> and if >>> > we can live with it? >>> >>> Alright. So, I'd like to propose this approach. 
What would a Gluster >>> storage admin do (in absence of Tendrl) in order to deal with this >>> notification and the event which caused it? Are there specific >>> sequence of steps which (s)he would perform and thus additional new >>> flows need to be built into Tendrl? Or, is this a (benign?) event >>> which is more of warning/information and no further (remedial) action >>> is required by the admin? >>> >> >> +Pranith - could you chime in with your thoughts here? >> > > Storage admin should try to fix the directory gfids to make sure two > directories won't have same gfid. This log/event is added to help people > who want to fix the directory gfids by giving the two directory path names. > So if I am a storage admin and I see this issue I will need to immediately > call redhat support and give these events/logs which will make life easier > for GSS to fix the issue. > > I am also adding Raghavendra/Nithya. > > >> >> >>> >>> _______________________________________________ >>> Tendrl-devel mailing list >>> Tendrl-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/tendrl-devel >>> >> >> >> >> -- >> >> ~ Atin (atinm) >> > > > > -- > Pranith > From sankarshan.mukhopadhyay at gmail.com Wed Dec 14 13:36:37 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Wed, 14 Dec 2016 19:06:37 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 6:57 PM, Pranith Kumar Karampuri wrote: > On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee wrote: > >> >> >> On Wed, Dec 14, 2016 at 6:49 PM, sankarshan wrote: >> >>> On 14 December 2016 at 18:40, Atin Mukherjee wrote: >>> > We have identified an issue related to POSIX_SAME_GFID event where this >>> > unwanted event is seen for .trashcan and .trashcan/internal_op folders. 
>>> > This event is meant to emitted from posix stack from Gluster in mkdir >>> > codepath in case an existing directory with the new mkdir request shares >>> > the same GFID which can lead to inconsistencies. This particular case is >>> > observed when a brick stop and start is performed. Now the question I've >>> > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. >>> Will >>> > there be any reactive action taken against it or it just gets notified >>> to >>> > the admin? >>> > >>> > Could you please assess this case w.r.t how does this impact tendrl and >>> if >>> > we can live with it? >>> >>> Alright. So, I'd like to propose this approach. What would a Gluster >>> storage admin do (in absence of Tendrl) in order to deal with this >>> notification and the event which caused it? Are there specific >>> sequence of steps which (s)he would perform and thus additional new >>> flows need to be built into Tendrl? Or, is this a (benign?) event >>> which is more of warning/information and no further (remedial) action >>> is required by the admin? >>> >> >> +Pranith - could you chime in with your thoughts here? >> > > Storage admin should try to fix the directory gfids to make sure two > directories won't have same gfid. This log/event is added to help people > who want to fix the directory gfids by giving the two directory path names. > So if I am a storage admin and I see this issue I will need to immediately > call redhat support and give these events/logs which will make life easier > for GSS to fix the issue. > For the moment, let us assume that this is running community Gluster on CentOS. What actions is the admin expected to undertake? What is the process of fixing the directory gfids - is this something that is usually undertaken from the 'shell'? > I am also adding Raghavendra/Nithya. 
> -- sankarshan mukhopadhyay From nbalacha at redhat.com Wed Dec 14 13:40:32 2016 From: nbalacha at redhat.com (Nithya Balachandran) Date: Wed, 14 Dec 2016 19:10:32 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: As per the BZ this shows up only when the brick is restarted - the admin should not be doing anything in this case. I think we should fix this in the code to not send the event in this case. Regards, Nithya On 14 December 2016 at 19:06, Sankarshan Mukhopadhyay < sankarshan.mukhopadhyay at gmail.com> wrote: > On Wed, Dec 14, 2016 at 6:57 PM, Pranith Kumar Karampuri > wrote: > > On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee > wrote: > > > >> > >> > >> On Wed, Dec 14, 2016 at 6:49 PM, sankarshan > wrote: > >> > >>> On 14 December 2016 at 18:40, Atin Mukherjee > wrote: > >>> > We have identified an issue related to POSIX_SAME_GFID event where > this > >>> > unwanted event is seen for .trashcan and .trashcan/internal_op > folders. > >>> > This event is meant to emitted from posix stack from Gluster in mkdir > >>> > codepath in case an existing directory with the new mkdir request > shares > >>> > the same GFID which can lead to inconsistencies. This particular > case is > >>> > observed when a brick stop and start is performed. Now the question > I've > >>> > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. > >>> Will > >>> > there be any reactive action taken against it or it just gets > notified > >>> to > >>> > the admin? > >>> > > >>> > Could you please assess this case w.r.t how does this impact tendrl > and > >>> if > >>> > we can live with it? > >>> > >>> Alright. So, I'd like to propose this approach. What would a Gluster > >>> storage admin do (in absence of Tendrl) in order to deal with this > >>> notification and the event which caused it?
Are there specific > >>> sequence of steps which (s)he would perform and thus additional new > >>> flows need to be built into Tendrl? Or, is this a (benign?) event > >>> which is more of warning/information and no further (remedial) action > >>> is required by the admin? > >>> > >> > >> +Pranith - could you chime in with your thoughts here? > >> > > > > Storage admin should try to fix the directory gfids to make sure two > > directories won't have same gfid. This log/event is added to help people > > who want to fix the directory gfids by giving the two directory path > names. > > So if I am a storage admin and I see this issue I will need to > immediately > > call redhat support and give these events/logs which will make life > easier > > for GSS to fix the issue. > > > > For the moment, let us assume that this is running community Gluster > on CentOS. What actions is the admin expected to undertake? What is > the process of fixing the directory gfids - is this something that is > usually undertaken from the 'shell'? > > > I am also adding Raghavendra/Nithya. > > > > > > -- > sankarshan mukhopadhyay > > From amukherj at redhat.com Wed Dec 14 13:46:53 2016 From: amukherj at redhat.com (Atin Mukherjee) Date: Wed, 14 Dec 2016 19:16:53 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 7:03 PM, Nithya Balachandran wrote: > I think this shows up because the internal dirs already exist on the brick > not because they have the same gfid. I've seen the posix log message every > time a brick is restarted. > > This is something we should ignore for these dirs. >

    gf_event (EVENT_POSIX_SAME_GFID,
              "gfid=%s;path=%s;"
              "newpath=%s;brick=%s:%s",
              uuid_utoa (uuid_req),
              gfid_path ? gfid_path : "",
              loc->path,
              priv->hostname, priv->base_path);

And I guess we'd be also passing the directory name in this event, so tendrl can check if the trashcan directory is there in the parameter list, then ignore the event? IMO, in occurrence of this event, UI at best can notify an admin and then a corrective set of actions need to be taken. > Regards, > Nithya > > On 14 December 2016 at 18:57, Pranith Kumar Karampuri > wrote: > >> >> >> On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee >> wrote: >> >>> >>> >>> On Wed, Dec 14, 2016 at 6:49 PM, sankarshan >>> wrote: >>> >>>> On 14 December 2016 at 18:40, Atin Mukherjee >>>> wrote: >>>> > We have identified an issue related to POSIX_SAME_GFID event where >>>> this >>>> > unwanted event is seen for .trashcan and .trashcan/internal_op >>>> folders. >>>> > This event is meant to emitted from posix stack from Gluster in mkdir >>>> > codepath in case an existing directory with the new mkdir request >>>> shares >>>> > the same GFID which can lead to inconsistencies. This particular case >>>> is >>>> > observed when a brick stop and start is performed. Now the question >>>> I've >>>> > here is what tendrl is supposed to do once it sees a POSIX_SAME_GFID. >>>> Will >>>> > there be any reactive action taken against it or it just gets >>>> notified to >>>> > the admin? >>>> > >>>> > Could you please assess this case w.r.t how does this impact tendrl >>>> and if >>>> > we can live with it? >>>> >>>> Alright. So, I'd like to propose this approach. What would a Gluster >>>> storage admin do (in absence of Tendrl) in order to deal with this >>>> notification and the event which caused it? Are there specific >>>> sequence of steps which (s)he would perform and thus additional new >>>> flows need to be built into Tendrl? Or, is this a (benign?) event >>>> which is more of warning/information and no further (remedial) action >>>> is required by the admin? >>>> >>> >>> +Pranith - could you chime in with your thoughts here?
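A sketch of the event-side filtering suggested above — dropping POSIX_SAME_GFID events whose paths point at Gluster's internal trash directories instead of surfacing them to the admin. The helper name and sample payloads are hypothetical, and (as noted further down the thread) the preferred fix is to stop emitting the event in Gluster itself rather than filter in Tendrl:

```python
TRASH_SUFFIXES = ("/.trashcan", "/.trashcan/internal_op")

def is_trashcan_event(payload):
    """Return True when the event's path or newpath refers to one of the
    internal trash directories, i.e. the event can be ignored rather
    than raised as an alert."""
    fields = dict(part.split("=", 1) for part in payload.split(";"))
    return any(fields.get(key, "").rstrip("/").endswith(TRASH_SUFFIXES)
               for key in ("path", "newpath"))

# Illustrative payloads (values made up):
noisy = ("gfid=deadbeef-0000-0000-0000-000000000001;"
         "path=/bricks/b1/.trashcan;newpath=/bricks/b1/.trashcan;"
         "brick=node1.example.com:/bricks/b1")
real = ("gfid=deadbeef-0000-0000-0000-000000000002;"
        "path=/bricks/b1/dir_a;newpath=/bricks/b1/dir_b;"
        "brick=node1.example.com:/bricks/b1")
print(is_trashcan_event(noisy), is_trashcan_event(real))  # -> True False
```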
>>> >> >> Storage admin should try to fix the directory gfids to make sure two >> directories won't have same gfid. This log/event is added to help people >> who want to fix the directory gfids by giving the two directory path names. >> So if I am a storage admin and I see this issue I will need to immediately >> call redhat support and give these events/logs which will make life easier >> for GSS to fix the issue. >> >> I am also adding Raghavendra/Nithya. >> >> >>> >>> >>>> >>>> _______________________________________________ >>>> Tendrl-devel mailing list >>>> Tendrl-devel at redhat.com >>>> https://www.redhat.com/mailman/listinfo/tendrl-devel >>>> >>> >>> >>> >>> -- >>> >>> ~ Atin (atinm) >>> >> >> >> >> -- >> Pranith >> > > -- ~ Atin (atinm) From pkarampu at redhat.com Wed Dec 14 13:55:14 2016 From: pkarampu at redhat.com (Pranith Kumar Karampuri) Date: Wed, 14 Dec 2016 19:25:14 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 7:16 PM, Atin Mukherjee wrote: > > > On Wed, Dec 14, 2016 at 7:03 PM, Nithya Balachandran > wrote: > >> I think this shows up because the internal dirs already exist on the >> brick not because they have the same gfid. I've seen the posix log message >> every time a brick is restarted. >> >> This is something we should ignore for these dirs. >> > > gf_event (EVENT_POSIX_SAME_GFID, > "gfid=%s;path=%s;" > "newpath=%s;brick=%s:%s", > > uuid_utoa (uuid_req), > > gfid_path ? gfid_path : "", > loc->path, > priv->hostname, priv->base_path); > > And I guess we'd be also passing the directory name in this event, so > tendrl can check if the trashcan directory is there in the parameter list, > then ignore the event ? 
> Anoop is already working to remove trash directory appearing here, so it is better to not have this extra filtering implemented in tendrl IMO > IMO, in occurrence of this event, UI at best can notify an admin and then > a corrective set of actions need to be taken. > Yes. > > >> Regards, >> Nithya >> >> On 14 December 2016 at 18:57, Pranith Kumar Karampuri < >> pkarampu at redhat.com> wrote: >> >>> >>> >>> On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee >>> wrote: >>> >>>> >>>> >>>> On Wed, Dec 14, 2016 at 6:49 PM, sankarshan >>>> wrote: >>>> >>>>> On 14 December 2016 at 18:40, Atin Mukherjee >>>>> wrote: >>>>> > We have identified an issue related to POSIX_SAME_GFID event where >>>>> this >>>>> > unwanted event is seen for .trashcan and .trashcan/internal_op >>>>> folders. >>>>> > This event is meant to emitted from posix stack from Gluster in mkdir >>>>> > codepath in case an existing directory with the new mkdir request >>>>> shares >>>>> > the same GFID which can lead to inconsistencies. This particular >>>>> case is >>>>> > observed when a brick stop and start is performed. Now the question >>>>> I've >>>>> > here is what tendrl is supposed to do once it sees a >>>>> POSIX_SAME_GFID. Will >>>>> > there be any reactive action taken against it or it just gets >>>>> notified to >>>>> > the admin? >>>>> > >>>>> > Could you please assess this case w.r.t how does this impact tendrl >>>>> and if >>>>> > we can live with it? >>>>> >>>>> Alright. So, I'd like to propose this approach. What would a Gluster >>>>> storage admin do (in absence of Tendrl) in order to deal with this >>>>> notification and the event which caused it? Are there specific >>>>> sequence of steps which (s)he would perform and thus additional new >>>>> flows need to be built into Tendrl? Or, is this a (benign?) event >>>>> which is more of warning/information and no further (remedial) action >>>>> is required by the admin? >>>>> >>>> >>>> +Pranith - could you chime in with your thoughts here? 
>>>> >>> Storage admin should try to fix the directory gfids to make sure two >>> directories won't have same gfid. This log/event is added to help people >>> who want to fix the directory gfids by giving the two directory path names. >>> So if I am a storage admin and I see this issue I will need to immediately >>> call redhat support and give these events/logs which will make life easier >>> for GSS to fix the issue. >>> >>> I am also adding Raghavendra/Nithya. >>> >>> >>>> >>>> >>>>> >>>>> _______________________________________________ >>>>> Tendrl-devel mailing list >>>>> Tendrl-devel at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/tendrl-devel >>>>> >>>> >>>> >>>> >>>> -- >>>> >>>> ~ Atin (atinm) >>>> >>> >>> >>> >>> -- >>> Pranith >>> >> >> > > > -- > > ~ Atin (atinm) > -- Pranith

From mrugesh at brainfunked.org Wed Dec 14 13:56:09 2016
From: mrugesh at brainfunked.org (Mrugesh Karnik)
Date: Wed, 14 Dec 2016 19:26:09 +0530
Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161213
Message-ID:

We have a shiny new meetbot helping us with the check-in meetings. Here's the output from the meeting, which nicely summarises it, with specific action items:

https://meetbot.fedoraproject.org/tendrl-devel/2016-12-14/check-in_20161214.2016-12-14-09.02.html

In addition to the meetbot output, some specific points I'd like to highlight:

* The team is prioritising reviews and merges for specifications. Once merged, the implementation will be targeted only towards these specifications.
* Shubhendu's specifications had been blocked on reviews yesterday. All the specifications have been reviewed and merged since yesterday's check-in. He has started working on the implementation of the specifications regarding refactoring and gluster integration.
* Specifications regarding the alerting and performance monitoring components have had good progress and are expected to be merged and ready for implementation by the end of this week.
* When a specification was requested for things being worked on in the usmqe-tests repository, mbukatovic responded with "we are not going to provide a specification file for changes in usmqe repositories, unless that change requires cooperation with other tenrl people". In the interest of time, it was decided that this topic needs to be discussed further on the mailing list. -- Mrugesh From sankarshan.mukhopadhyay at gmail.com Wed Dec 14 14:05:05 2016 From: sankarshan.mukhopadhyay at gmail.com (Sankarshan Mukhopadhyay) Date: Wed, 14 Dec 2016 19:35:05 +0530 Subject: [Tendrl-devel] Specifications and repositories [was:Re: [TRACKING] Daily check-in summary for 20161213] Message-ID: On Wed, Dec 14, 2016 at 7:26 PM, Mrugesh Karnik wrote: [snip] > * When a specification was requested for things being worked on in the > usmqe-tests repository, mbukatovic responded with "we are not going to > provide a specification file for changes in usmqe repositories, unless > that change requires cooperation with other tenrl people". In the > interest of time, it was decided that this topic needs to be discussed > further on the mailing list. There are two good things I'd like to highlight from this update - (a) Martin has been forthcoming and clear about the approach to be undertaken for this particular repository and (b) it was decided that the list was a better forum to have a conversation around this topic. With that, I have a question - the underlying approach to the specifications is that it provides a clearly articulated description for all participants in the Tendrl project to take stock of tasks being undertaken. Right now, we are a 'small' group - we hope to gather more contributors - perhaps from outside our present group. How can we do better in doing so? 
-- sankarshan mukhopadhyay From japplewh at redhat.com Wed Dec 14 15:05:35 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Wed, 14 Dec 2016 10:05:35 -0500 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: So yes - the console can pop up a notification warning on this if the event is consumed by Tendrl. We can have some discussions on the corrective action an admin would take offline. On Wed, Dec 14, 2016 at 8:55 AM, Pranith Kumar Karampuri wrote: > On Wed, Dec 14, 2016 at 7:16 PM, Atin Mukherjee wrote: > >> >> >> On Wed, Dec 14, 2016 at 7:03 PM, Nithya Balachandran >> wrote: >> >>> I think this shows up because the internal dirs already exist on the >>> brick not because they have the same gfid. I've seen the posix log message >>> every time a brick is restarted. >>> >>> This is something we should ignore for these dirs. >>> >> >> gf_event (EVENT_POSIX_SAME_GFID, >> "gfid=%s;path=%s;" >> "newpath=%s;brick=%s:%s", >> >> uuid_utoa (uuid_req), >> >> gfid_path ? gfid_path : "", >> loc->path, >> priv->hostname, priv->base_path); >> >> And I guess we'd be also passing the directory name in this event, so >> tendrl can check if the trashcan directory is there in the parameter list, >> then ignore the event ? >> > > Anoop is already working to remove trash directory appearing here, so it is > better to not have this extra filtering implemented in tendrl IMO > > >> IMO, in occurrence of this event, UI at best can notify an admin and then >> a corrective set of actions need to be taken. >> > > Yes. 
> > >> >> >>> Regards, >>> Nithya >>> >>> On 14 December 2016 at 18:57, Pranith Kumar Karampuri < >>> pkarampu at redhat.com> wrote: >>> >>>> >>>> >>>> On Wed, Dec 14, 2016 at 6:51 PM, Atin Mukherjee >>>> wrote: >>>> >>>>> >>>>> >>>>> On Wed, Dec 14, 2016 at 6:49 PM, sankarshan >>>>> wrote: >>>>> >>>>>> On 14 December 2016 at 18:40, Atin Mukherjee >>>>>> wrote: >>>>>> > We have identified an issue related to POSIX_SAME_GFID event where >>>>>> this >>>>>> > unwanted event is seen for .trashcan and .trashcan/internal_op >>>>>> folders. >>>>>> > This event is meant to emitted from posix stack from Gluster in mkdir >>>>>> > codepath in case an existing directory with the new mkdir request >>>>>> shares >>>>>> > the same GFID which can lead to inconsistencies. This particular >>>>>> case is >>>>>> > observed when a brick stop and start is performed. Now the question >>>>>> I've >>>>>> > here is what tendrl is supposed to do once it sees a >>>>>> POSIX_SAME_GFID. Will >>>>>> > there be any reactive action taken against it or it just gets >>>>>> notified to >>>>>> > the admin? >>>>>> > >>>>>> > Could you please assess this case w.r.t how does this impact tendrl >>>>>> and if >>>>>> > we can live with it? >>>>>> >>>>>> Alright. So, I'd like to propose this approach. What would a Gluster >>>>>> storage admin do (in absence of Tendrl) in order to deal with this >>>>>> notification and the event which caused it? Are there specific >>>>>> sequence of steps which (s)he would perform and thus additional new >>>>>> flows need to be built into Tendrl? Or, is this a (benign?) event >>>>>> which is more of warning/information and no further (remedial) action >>>>>> is required by the admin? >>>>>> >>>>> >>>>> +Pranith - could you chime in with your thoughts here? >>>>> >>>> >>>> Storage admin should try to fix the directory gfids to make sure two >>>> directories won't have same gfid. 
This log/event is added to help people >>>> who want to fix the directory gfids by giving the two directory path names. >>>> So if I am a storage admin and I see this issue I will need to immediately >>>> call redhat support and give these events/logs which will make life easier >>>> for GSS to fix the issue. >>>> >>>> I am also adding Raghavendra/Nithya. -- Jeff Applewhite Principal Product Manager From amukherj at redhat.com Wed Dec 14 15:14:17 2016 From: amukherj at redhat.com (Atin Mukherjee) Date: Wed, 14 Dec 2016 20:44:17 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 7:25 PM, Pranith Kumar Karampuri < pkarampu at redhat.com> wrote: > Anoop is already working to remove trash directory appearing here, so it > is better to not have this extra filtering implemented in tendrl IMO So we should fix it and not defer? -- ~ Atin (atinm) From julim at redhat.com Wed Dec 14 15:47:21 2016 From: julim at redhat.com (Ju Lim) Date: Wed, 14 Dec 2016 21:17:21 +0530 Subject: [Tendrl-devel] Sprint 7 Planning Recap Message-ID: Team: This is a quick recap / summary of what is proposed for this Sprint 7 (14 Dec 2016 - 27 Dec 2016): - Finish Import Ceph and Gluster Cluster UI and get it ready for testing - TEN-3 - Import a Gluster 3.2 ClusterNEW - TEN-4 - Import a Ceph 2.x ClusterNEW - Alerting and monitoring - backend, UI (stretch goal), get it ready for testing - TEN-84 - Performance Monitoring, Alerts, Notifications NEW - TEN-125 - UX: Migrate and
Publish Task and Events Designs ACCEPTED (UX design for Alerting and Task Management) - List views in UI, get it ready for testing - TEN-149 - inventory listing page BACKLOG - *Note: This user story needs to be updated* - Specs for create Ceph and Gluster cluster workflows - TEN-1 - Install Gluster trusted pool NEW - TEN-2 - Install Ceph 2.x Cluster USER REQUIREMENTS Other items occurring include: - Refactoring specs to be merged Wed, some repo name changes - Brno office closure last week of Dec 2016, and many QE folks will be out - USA office shutdown (23 Dec 2016 - 2 Jan 2017) - USA folks will be out - Tendrl-Devel meeting minutes can be seen at https://meetbot.fedoraproject.org/tendrl-devel/2016-12-14/check-in_20161214.2016-12-14-09.02.html . Action Items from the Planning discussion earlier: - Docs plan and getting Bobb engaged and starting to sprint together (Sankarshan) - Security related best practices (Sankarshan) - Sharmilla to lead next Sprint 8 planning discussion as US folks will be unavailable - QE (and Everyone) should review the user stories for this Sprint and ensure they are comfortable and understand the user stories and acceptance criteria. Please comment directly on JIRA if you've any questions/comments on the requirements. - Please update the shared calendar with your PTO/OOO. More detailed notes can be seen at: https://docs.google.com/a/redhat.com/document/d/15QvJPME9ez8ywP-XqeO9HoY0ttAj_AiSguG-C2CAr2w/edit?usp=sharing . Thank you, Ju From nbalacha at redhat.com Thu Dec 15 03:47:07 2016 From: nbalacha at redhat.com (Nithya Balachandran) Date: Thu, 15 Dec 2016 09:17:07 +0530 Subject: [Tendrl-devel] Impact on tendrl due to BZ 1404110 - POSIX_SAME_GFID event seen for .trashcan folder and .trashcan/internal_op ?
In-Reply-To: References: Message-ID: On 14 December 2016 at 20:44, Atin Mukherjee wrote: > So we should fix it and not defer? I think so. It should be fairly simple - just check if the dir you are trying to create already exists with the same gfid. Regards, Nithya From mrugesh at brainfunked.org Thu Dec 15 11:51:47 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Thu, 15 Dec 2016 17:21:47 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161215 Message-ID: Meetbot output contains all the details: https://meetbot.fedoraproject.org/tendrl-devel/2016-12-15/check-in_20161215.2016-12-15-09.03.html -- Mrugesh From julim at redhat.com Thu Dec 15 16:34:57 2016 From: julim at redhat.com (Ju Lim) Date: Thu, 15 Dec 2016 11:34:57 -0500 Subject: [Tendrl-devel] Tendrl: Create/Install Ceph Cluster UX Design Review: Recap Message-ID: Team: Thank you for those who were able to attend the Tendrl UX Design Review session earlier today. This is a very brief recap of the review (and is provided to accompany the recording earlier as there are some action items and noteworthy discussion points that will impact the Spec and epics/user stories) since there were folks who were not present. The following is a link to the recording: https://bluejeans.com/s/kXzCe/ Please note that all UX Designs and Reviews may be found at: https://tendrl.atlassian.net/wiki/display/TEN/UX+Designs+and+Design+Reviews.
To see a list of published UX designs to-date, please go to UI Designs Landing Page on GitHub . UX Design reviewed: Create Ceph Cluster Today, we did a partial review of the Create/Install Ceph Cluster workflow. We got only as far as reviewing a couple of steps in the wizard, where we talked about general cluster configurations, adding the hosts, and disk configuration. We'll pick up next Monday during the weekly Architecture Meeting from the Networking step onwards. Discussion topics raised during the UX review: - Do we want this to be Reference Architecture driven? This will help guide the Production vs. PoC/demo/eval type deployment configurations and validations. - Uncertainty as to what the minimum host requirements are -- for Tendrl we said a host with an operating system and possibly SSH. What about for TripleO, which mandates bare metal hosts? - What is the # of hosts we are designing this for, testing for (Perf. & Scale considerations), and want it to support? The same holds true for the cluster, # of disks per host, # of networks, etc. - For RGW, what are we planning to do for HA, e.g. pacemaker+corosync, HAproxy, etc.? What does that mean for config in the UI? Is it just the VIP only? For HA, need to confirm it's at least 3 nodes (as user stories currently show 2). - Will we do auto-assigning of node roles? If we need to scale to 100+ hosts, we may want to consider flavors, node roles, etc. Additionally, the Production vs. PoC deployment mode will also change what we show, i.e. allowance for co-location of node roles. - What level of disk configuration do we want to provide? What capabilities will we expose? This includes journal configuration (colo), mappings, etc. What are the disk configuration scenarios we want to ensure we support and design well for? What is the MVP? - How much customization do we want to allow? Do we want to consider import / export of the disk information to allow customizations? What about if new disks are added?
It's fraught with validation issues and concerns. - Ability to zap disks and include / exclude disks - Can we automatically calculate journal size (vs. having the user enter that information)? - Another question raised: if there are SSDs with a SAS interface, how will they show in the UI, and how will they be treated? - Further clarification is needed for requirements, or rather epics and user stories, in multiple areas; this will require Jeff, Mrugesh and other Development Team Members, and UXD to collaborate to refine them further. Topics not covered yet include: - Networking - # of networks supported, ability to expose speed on the subnet and expansion of each subnet to see which nodes are on it. Implications if we need to do baremetal support in the API for configuration (for TripleO)? - Validations -- how will we tackle this? AFAIK, we don't yet have a modular validation framework for Tendrl and may want to consider this for pre-flight, in-flight, and post-deployment validations in multiple workflows. Should we look at the TripleO clapper repo for this and maybe leverage some of their validations for Tendrl? - Storage / pool creation - do we want to still allow modification of the Data pool and remove the RBD and MDS pools? Do we want to include RBD creation in the workflow? There are limitations for RGW pool creation since we lack visibility into region/failure domain information, so we need to document what the acceptance criteria for this is. Additionally, if CephFS is going to be used, it will impact pgcalc (which does not consider it), and we'll need to document this and provide warnings. - Clarification on what we'll be doing on logging in the UI (or not). Action Items: - Jeff will be working to clarify the epics, user stories, and acceptance criteria - Next review (to complete this design review) will be next Monday, 19 Dec 2016, during the weekly Tendrl Architecture meeting. Please invite yourself to the meeting if you're not yet in the meeting invite.
If I missed anything or folks would like to raise any concerns or new questions, please feel free to add to this or raise it next Monday. Thank you, Ju From jefbrown at redhat.com Thu Dec 15 19:31:49 2016 From: jefbrown at redhat.com (Jeff Brown) Date: Thu, 15 Dec 2016 14:31:49 -0500 Subject: [Tendrl-devel] Tendrl: Create/Install Ceph Cluster UX Design Review: Recap In-Reply-To: References: Message-ID: Hi Ju, This says recap? I didn't have this on my calendar. Jeff On Thu, Dec 15, 2016 at 11:34 AM, Ju Lim wrote: > Thank you for those who were able to attend the Tendrl UX Design Review > session earlier today. From julim at redhat.com Thu Dec 15 21:27:53 2016 From: julim at redhat.com (Ju Lim) Date: Thu, 15 Dec 2016 16:27:53 -0500 Subject: [Tendrl-devel] Tendrl: Create/Install Ceph Cluster UX Design Review: Recap In-Reply-To: References: Message-ID: Hi Jeff: The event was on the shared calendar. Folks not directly in the Tendrl team are supposed to self-invite themselves. In future, I'll add you to the reviews. Sorry for the inconvenience, Ju On Thu, Dec 15, 2016 at 2:31 PM, Jeff Brown wrote: > Hi Ju, > > This says recap? I didn't have this on my calendar. > > Jeff From dnarayan at redhat.com Fri Dec 16 06:47:09 2016 From: dnarayan at redhat.com (Darshan Narayana Murthy) Date: Fri, 16 Dec 2016 01:47:09 -0500 (EST) Subject: [Tendrl-devel] Using gdeploy for gluster brick provisioning In-Reply-To: <271974042.5126710.1481866665404.JavaMail.zimbra@redhat.com> Message-ID: <2134659667.5138990.1481870829555.JavaMail.zimbra@redhat.com> Hi sac, Considering that tendrl will use gdeploy to provision gluster cluster, It seems very logical to use gdeploy for brick provisioning as well. Instead of having the code to provision bricks in tendrl code base. But for us to consume brick provisioning from gdeploy, we need some changes in it. Like gdeploy has to use tools like blivet, libstoragemgmt underneath to provision gluster bricks. To discuss the feasibility of including these changes, we can raise RFE against gdeploy. What would be the right place for this ? Thanks, Darshan From surs at redhat.com Fri Dec 16 08:48:02 2016 From: surs at redhat.com (Sachidananda URS) Date: Fri, 16 Dec 2016 14:18:02 +0530 Subject: [Tendrl-devel] Using gdeploy for gluster brick provisioning In-Reply-To: <2134659667.5138990.1481870829555.JavaMail.zimbra@redhat.com> References: <271974042.5126710.1481866665404.JavaMail.zimbra@redhat.com> <2134659667.5138990.1481870829555.JavaMail.zimbra@redhat.com> Message-ID: Hi Darshan, On Fri, Dec 16, 2016 at 12:17 PM, Darshan Narayana Murthy < dnarayan at redhat.com> wrote: > Hi sac, > > Considering that tendrl will use gdeploy to provision gluster cluster, It > seems very
Instead of having > the code to > provision bricks in tendrl code base. But for us to consume brick > provisioning from > gdeploy, we need some changes in it. Like gdeploy has to use tools like > blivet, > libstoragemgmt underneath to provision gluster bricks. > > That will be a lot of changes. We will have to see how this can be achieved. > To discuss the feasibility of including these changes, we can raise RFE > against > gdeploy. What would be the right place for this ? > > Github would be the place for that. Can you please raise an issue on github? -sac From dnarayan at redhat.com Fri Dec 16 10:49:43 2016 From: dnarayan at redhat.com (Darshan Narayana Murthy) Date: Fri, 16 Dec 2016 05:49:43 -0500 (EST) Subject: [Tendrl-devel] Using gdeploy for gluster brick provisioning In-Reply-To: References: <271974042.5126710.1481866665404.JavaMail.zimbra@redhat.com> <2134659667.5138990.1481870829555.JavaMail.zimbra@redhat.com> Message-ID: <186164899.5192288.1481885383959.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Sachidananda URS" > To: "Darshan Narayana Murthy" > Cc: "Mailing list for the contributors to the Tendrl project" > Sent: Friday, December 16, 2016 2:18:02 PM > Subject: Re: Using gdeploy for gluster brick provisioning > > Hi Darshan, > > On Fri, Dec 16, 2016 at 12:17 PM, Darshan Narayana Murthy < > dnarayan at redhat.com> wrote: > > > Hi sac, > > > > Considering that tendrl will use gdeploy to provision gluster cluster, It > > seems very > > logical to use gdeploy for brick provisioning as well. Instead of having > > the code to > > provision bricks in tendrl code base. But for us to consume brick > > provisioning from > > gdeploy, we need some changes in it. Like gdeploy has to use tools like > > blivet, > > libstoragemgmt underneath to provision gluster bricks. > > > > > That will be a lot of changes. We will have to see how this can be achieved.
> > > > To discuss the feasibility of including these changes, we can raise RFE > > against > > gdeploy. What would be the right place for this ? > > > > > > Github would the place for that. Can you please raise an issue on github? Thanks sac, Have raised an issue: https://github.com/gluster/gdeploy/issues/257 to discuss further about this -Darshan > > -sac > From sankarshan at redhat.com Mon Dec 19 03:04:15 2016 From: sankarshan at redhat.com (sankarshan) Date: Mon, 19 Dec 2016 08:34:15 +0530 Subject: [Tendrl-devel] Using gdeploy for gluster brick provisioning In-Reply-To: <186164899.5192288.1481885383959.JavaMail.zimbra@redhat.com> References: <271974042.5126710.1481866665404.JavaMail.zimbra@redhat.com> <2134659667.5138990.1481870829555.JavaMail.zimbra@redhat.com> <186164899.5192288.1481885383959.JavaMail.zimbra@redhat.com> Message-ID: On 16 December 2016 at 16:19, Darshan Narayana Murthy wrote: > > > ----- Original Message ----- >> From: "Sachidananda URS" >> To: "Darshan Narayana Murthy" >> Cc: "Mailing list for the contributors to the Tendrl project" >> Sent: Friday, December 16, 2016 2:18:02 PM >> Subject: Re: Using gdeploy for gluster brick provisioning >> >> Hi Darshan, >> >> On Fri, Dec 16, 2016 at 12:17 PM, Darshan Narayana Murthy < >> dnarayan at redhat.com> wrote: >> >> > Hi sac, >> > >> > Considering that tendrl will use gdeploy to provision gluster cluster, It >> > seems very >> > logical to use gdeploy for brick provisioning as well. Instead of having >> > the code to >> > provision bricks in tendrl code base. But for us to consume brick >> > provisioning from >> > gdeploy, we need some changes in it. Like gdeploy has to use tools like >> > blivet, >> > libstoragemgmt underneath to provision gluster bricks. >> > >> > >> That will be a lot of changes. We will have to see how this can be achieved. >> >> >> > To discuss the feasibility of including these changes, we can raise RFE >> > against >> > gdeploy. What would be the right place for this ? 
>> > >> > >> Github would the place for that. Can you please raise an issue on github? > > Thanks sac, > > Have raised an issue: https://github.com/gluster/gdeploy/issues/257 to discuss > further about this > Thanks. Sachidananda, one of the things the Tendrl team would be looking forward to is understanding how this request aligns with the development roadmap for gdeploy. From mrugesh at brainfunked.org Mon Dec 19 11:09:56 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Mon, 19 Dec 2016 16:39:56 +0530 Subject: [Tendrl-devel] [TRACKING] Summary for the week of 12th Dec, priorities for the week of 19th Dec Message-ID: Firstly, apologies for the delay, as this update was to be sent on Friday. However, I was not able to send it due to an unexpected situation at home. All the listed IDs are of issues on the specifications repository (https://github.com/specifications/issues) Progress from last week, December 12th to 16th, in no particular order: * Pluggable delivery endpoints for alerts (#40). * API integration to expose time series data (#62). This enables the display of graphs and utilisation information on the UI. * Inventory for disks and networks per node (#41, #43). * Fixes to be compatible with the get-state output changes introduced in gluster upstream, post 3.9 (#30). * Object specific flows, to enable per object actions to be displayed on the UI (#34). * Versioned namespaces, to allow dynamic and backwards compatible support for newer versions of the storage systems and upgrades for the storage systems and tendrl components (#36). * Some streamlining to provide a centralised view of the definitions to ensure that all the tendrl components have a single view of the available functionality (#39), optimisations to reduce etcd traffic for definitions (#37) and additional attributes to aid dynamic API generation (#33, #35, #38).
* Refactoring to remove duplication of functionality between components and segregate the core framework into the common library (#31). * Built-in utilities that can be referenced in flows and reused across various components (#72). * Additional common classes to provide an abstracted view of the definitions (#32). Priorities for the week of 19th to 23rd December: The following considerations have influenced the priorities this week: * A couple of our component owners are on leave: Anmol (alerting and monitoring), Darshan (node agent). * The import cluster workflow needs to be enhanced to provide auto-detection of the deployed storage system and complete UI flows. However, on Friday, it was discovered that additional details need to be spec'd out for the node agent. This specification (#87) would impact the provisioning support for the storage systems and the monitoring stack, along with the import cluster workflow itself. * The impending holidays in Europe and the US. Here are the priorities: * Finish off the list views in the UI. Along with the implementation of the UI itself (#75, #84, #65), it also requires some enhancements from the monitoring stack (#62, #79) and the API (#35). Ceph pool utilisation data needs to be gathered in the ceph integration component (#80), as part of the core stack, and is being worked on as well. ** In parallel, the node agent specification (#87) mentioned above will be finished, which enables all the following features to proceed. ** Also in parallel is the specification for auto detection of various services by the node agent (#46). ** Also in parallel is the specification that enables real-time updates from in-flight operations to be gathered in the backend (#55). * The above three specifications would be completed along with the list views, to enable the enhanced import cluster workflow from the UI (#54, #56) to be
* In parallel to all of the above, the logging is being improved across the stack (#28) with machine parseable logs that would allow tracing. * The alerting component will be paused this week, since Anmol is on leave. However, it will be picked up immediately after the import cluster workflow, which should be completed by the time Anmol is back from leave. * Discussions and Q/A on the UX designs before the UX team is unavailable due to the holidays. * The specifications being worked in parallel to the list views and import cluster workflows contribute to the provisioning support as well. In addition to these, in the second half of the week, we'll be picking up specifications for defining the Ceph and Gluster flows (#51, #49), the exact scope and integration of the provisioning modules (#47). -- Mrugesh From nthomas at redhat.com Mon Dec 19 11:42:29 2016 From: nthomas at redhat.com (Nishanth Thomas) Date: Mon, 19 Dec 2016 17:12:29 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161219 Message-ID: Meetbot output contains all the details: https://meetbot.fedoraproject.org/tendrl-devel/2016-12-19/check-in_20161219.2016-12-19-09.02.html Thanks, Nishanth From mbukatov at redhat.com Mon Dec 19 11:51:44 2016 From: mbukatov at redhat.com (Martin Bukatovic) Date: Mon, 19 Dec 2016 12:51:44 +0100 Subject: [Tendrl-devel] Using gdeploy for gluster brick provisioning In-Reply-To: <186164899.5192288.1481885383959.JavaMail.zimbra@redhat.com> References: <271974042.5126710.1481866665404.JavaMail.zimbra@redhat.com> <2134659667.5138990.1481870829555.JavaMail.zimbra@redhat.com> <186164899.5192288.1481885383959.JavaMail.zimbra@redhat.com> Message-ID: <167437eb-4e2b-fe1e-a6b8-3ed0cd69ae54@redhat.com> On 12/16/2016 11:49 AM, Darshan Narayana Murthy wrote: > > > ----- Original Message ----- >> From: "Sachidananda URS" >> To: "Darshan Narayana Murthy" >> Cc: "Mailing list for the contributors to the Tendrl project" >> Sent: Friday, December 16, 2016 2:18:02 PM >> 
Subject: Re: Using gdeploy for gluster brick provisioning >> >> Hi Darshan, >> >> On Fri, Dec 16, 2016 at 12:17 PM, Darshan Narayana Murthy < >> dnarayan at redhat.com> wrote: >> >>> Hi sac, >>> >>> Considering that tendrl will use gdeploy to provision gluster cluster, It >>> seems very >>> logical to use gdeploy for brick provisioning as well. Instead of having >>> the code to >>> provision bricks in tendrl code base. But for us to consume brick >>> provisioning from >>> gdeploy, we need some changes in it. Like gdeploy has to use tools like >>> blivet, >>> libstoragemgmt underneath to provision gluster bricks. >>> >>> >> That will be a lot of changes. We will have to see how this can be achieved. >> >> >>> To discuss the feasibility of including these changes, we can raise RFE >>> against >>> gdeploy. What would be the right place for this ? >>> >>> >> >> Github would the place for that. Can you please raise an issue on github? > > Thanks sac, > > Have raised an issue: https://github.com/gluster/gdeploy/issues/257 to discuss > further about this Isn't this related to https://github.com/Tendrl/documentation/issues/49? I'm asking because there is no update related to the new gdeploy approach there. 
-- Martin Bukatovic USM QE team From nthomas at redhat.com Tue Dec 20 11:08:18 2016 From: nthomas at redhat.com (Nishanth Thomas) Date: Tue, 20 Dec 2016 16:38:18 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161220 Message-ID: Meetbot output contains all the details: https://meetbot.fedoraproject.org/tendrl-devel/2016-12-20/check-in_20161220.2016-12-20-09.04.html Thanks, Nishanth From japplewh at redhat.com Tue Dec 20 21:23:14 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Tue, 20 Dec 2016 16:23:14 -0500 Subject: [Tendrl-devel] create Ceph cluster requirements updated Message-ID: https://tendrl.atlassian.net/browse/TEN-2 please review -- Jeff Applewhite Principal Product Manager From julim at redhat.com Tue Dec 20 21:24:32 2016 From: julim at redhat.com (Ju Lim) Date: Tue, 20 Dec 2016 16:24:32 -0500 Subject: [Tendrl-devel] Import Cluster UX Design Review - workflow summary Message-ID: Team: Based on the review we had earlier today, here's a summary of the import cluster UI workflow (based on our discussion): *Triggers (where this workflow is launched):* - Landing Page / First Time Experience - Cluster List view *Workflow:* 1. User clicks on "Import Cluster." 2. User specifies whether he / she wishes to import a Ceph or Gluster cluster. 3. If Tendrl is able to discover hosts, which would need to have the Tendrl node agent pre-installed and pre-configured, it will display the list of automatically discovered hosts (for the cluster type user selected in #2) that are not being managed by Tendrl. 4a. User selects 1 host from the auto-discovered hosts. - For Ceph, you need to specify a Monitor host within the Ceph Cluster. Initially, Ceph 2.x clusters will be supported. - For Gluster, you can specify any host within the Gluster trusted storage pool (or cluster). Initially, Gluster 3.2 clusters will be supported. If User does not want to select a host from the auto-discovered hosts list, proceed to #4b. 4b.
If there are no hosts presented, System should automatically prompt user to specify a host (bootstrap node). - If Ceph, user would need to specify a Monitor host. - If Gluster, it can be any host within the Gluster trusted pool. 5. System prompts user whether to use login credentials or SSH keys for the selected host. - If login credentials, user specifies the user and password. If non-root user, then password has to be sudo password. - If SSH keys, user has to provide SSH key. System will assume (and use) the same credentials or SSH key for all hosts in the same cluster. 6. System lists a confirmation screen with all the hosts in the cluster along with login credentials or SSH keys, and it will visually indicate any host in the cluster where the login credentials or SSH key does not work. - The list will include host name, IP address, Operating System, Gluster / Ceph + release. - For Ceph, displaying server role for each host is a nice-to-have (if possible). 7. (Optional) User can change / overwrite any of the login credentials or SSH key that is having problems. - This is probably rare and an edge case. What this means is that user needs to fix this before he/she can resume this workflow. 8. If cluster associated with the selected host contains an unsupported configuration (e.g. unsupported Ceph or Gluster release), System notifies user to select another cluster to import or to cancel import. - For non-production Ceph / Gluster clusters, System will warn user that the cluster is considered a PoC / demo cluster and may have restricted capabilities after the cluster import is completed. The same applies to EC volumes that are not supported or volume types not supported in the initial Tendrl release. Same for Gluster (and/or host) hooks for gluster trusted storage pools (if applicable). This is just a short list of what's not supported in the initial list, and the fuller list on what's supported or qualified should be listed in the user story. 9.
System generates a task for the import cluster as part of the execution. I figured I'd send this summary out, and folks can think about it before tomorrow's UX design review discussion. *References* - JIRA: https://tendrl.atlassian.net/browse/TEN-3 (Import Gluster Trusted Storage Pool) - JIRA: https://tendrl.atlassian.net/browse/TEN-4 (Import Ceph Cluster) - UX Design: https://redhat.invisionapp.com/share/R88EUSGJK Thank you, Ju From mrugesh at brainfunked.org Wed Dec 21 10:19:34 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Wed, 21 Dec 2016 15:49:34 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161221 Message-ID: https://meetbot.fedoraproject.org/tendrl-devel/2016-12-21/check-in_20161221.2016-12-21-09.04.html -- Mrugesh From japplewh at redhat.com Wed Dec 21 14:00:08 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Wed, 21 Dec 2016 09:00:08 -0500 Subject: [Tendrl-devel] Import Cluster UX Design Review - workflow summary In-Reply-To: References: Message-ID: This looks good Ju - I think this captures what we discussed. The only thing I would add is that there would be some docs work on specifying what the pre-requisites are for *import* so that the experience is seamless. Here are a few items: - A pre-existing user with full sudo access to install and configure packages - (which could perhaps be limited after import) - A common password or ssh public key that will give us the needed access - NTP configured on all nodes - Proper hostname/domain name setup - Optionally a manual setup of the tendrl node agent to avoid keys/passwords - ?? On Tue, Dec 20, 2016 at 4:24 PM, Ju Lim wrote: > Team: > > Based on the review we had earlier today, here's a summary of the import > cluster UI workflow (based on our discussion): > > *Triggers (where this workflow is launched):* > > - Landing Page / First Time Experience > - Cluster List view > > > *Workflow:* > > 1. User clicks on "Import Cluster." > > 2.
User specifies whether he / she wishes to import a Ceph or Gluster > cluster. > > 3. If Tendrl is able to discover hosts, which would need to have the Tendrl > node agent pre-installed and pre-configured, it will display the list of > automatically discovered hosts (for the cluster type user selected in #2) > that are not being managed by Tendrl. > > 4a. User selects 1 host from the auto-discovered hosts. > > - For Ceph, you need specify a Monitor host within the Ceph Cluster. > Initially, Ceph 2.x cluster will be supported. > - For Gluster, you can specify any host within the Gluster trusted > storage pool (or cluster). Initially, Gluster 3.2 cluster supported. > > > If User does not want to select a host from the auto-discovered hosts list, > proceed to #4b. > > > 4b. If there are no hosts presented, System should automatically prompt > user to specify a host (bootstrap node). > > - If Ceph, user would need to specify a Monitor host. > - If Gluster, it can be any host within the Gluster trusted pool. > > 5. System prompts user whether to use login credentials or SSH keys for the > selected host. > > - If login credentials, user specifies the user and password. If > non-root user, then password has to be sudo password. > - If SSH keys, user has to provide SSH key. > > System will assume (and use) the same credentials or SSH key for all hosts > in the same cluster. > > > 6. System lists a confirmation screen with all the hosts in the cluster > along with login credentials or SSH keys, and it will visually indicate any > host in the cluster whereby the login credentials or SSH key does not work. > > > - The list will include host name, IP address, Operating System, > Gluster / Ceph + release. > - For Ceph, displaying server role for each host is a nice-to-have (if > possible). > > 7. (Optional) User can change / overwrite any of the login credentials or > SSH key that is having problems. > > - This is probably rare and an edge case. 
What this means is that user > needs to fix this before he/she can resume this workflow. > > 8. If cluster associated with the selected host contains an unsupported > configuration (e.g. unsupported Ceph or Gluster release, System notifies > user to select another cluster to import or to cancel import. > > - For non-production Ceph / Gluster clusters, System will warn user that > the cluster is considered Poc / demo cluster and may have restricted > capabilities after cluster is completed. The same applies to EC volumes > that are not supported or volume types not supported in the initial > Tendrl > release. Same for Gluster (and/or host) hooks for gluster trusted > storage > pools (if applicable). > > This is just a short list of what's not supported in the initial list, > and the fuller list on what's supported or qualified should be listed in > the user story. > > 9. System generates a task for the import cluster as part of the execution. > > > I figured I'd send this summary out, and folks can think about it before > tomorrow's UX design review discussion. > > *References* > > - JIRA: https://tendrl.atlassian.net/browse/TEN-3 (Import Gluster > Trusted Storage Pool) > - JIRA: https://tendrl.atlassian.net/browse/TEN-4 (Import Ceph Cluster) > - UX Design: https://redhat.invisionapp.com/share/R88EUSGJK > > > Thank you, > Ju > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > -- Jeff Applewhite Principal Product Manager From mkudlej at redhat.com Wed Dec 21 20:41:02 2016 From: mkudlej at redhat.com (Martin Kudlej) Date: Wed, 21 Dec 2016 21:41:02 +0100 Subject: [Tendrl-devel] labeling github issues In-Reply-To: References: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> Message-ID: <86df1867-453b-ef8d-0022-21c1018b2d53@redhat.com> Hi, could admins of github repositories give people from QE team permission to set labels, please? 
This will be very helpful for us and it gives us ability to better sort and search issues and pull requests in Tendrl repositories. Thank you! On 12/01/2016 11:38 AM, Martin Bukatovic wrote: > On 12/01/2016 11:16 AM, Sankarshan Mukhopadhyay wrote: >> On Thu, Dec 1, 2016 at 3:32 PM, Martin Bukatovic wrote: >>> I would like to assign labels (such as "bug" or "question") to github >>> issues I have created, but I don't seem to have the access rights >>> needed. Could you reconfigure the Tendrl github group so that qe team >>> members can add labels to theirs github issues? >> >> Alright. I'm missing something here. The specific label (names, which >> you indicate) exist. Can you provide me with a link to a particular >> issue? It should be easier for me to figure out what to do. > > The problem I have here is that while the labels exists, and other > team members are using them on some github issues, I'm unable to do > so. > > When I click on "New issue" of any Tendrl project on github, I don't > see the the knobs for setting the label at all [1] - the right panel > which provides those options is missing. Neither I see them when I > try to edit already created issue. Since I'm able to label issues of > my own projects, I suspect that this is related to access rights > of Tendrl github group. > > To try this yourself, try to click on "New issue" button of tendrl > documentation project[2] and compare it with my screenshot[1]. > If you are able to see knobs to set labels in the right panel, while > I'm not provided with this option as shown on the screenshot, we > would need to reconfigure access rights so that the qe team members > can add labels to tendrl github issues. > > Thank you for your help. > > [1] https://ibin.co/33riFN0YCthe.png > [2] https://github.com/Tendrl/documentation/issues/new > -- Best Regards, Martin Kudlej. RHSC/USM Senior Quality Assurance Engineer Red Hat Czech s.r.o. 
Phone: +420 532 294 155 E-mail:mkudlej at redhat.com IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, #usm-meeting @ redhat #tendrl-devel @ freenode From japplewh at redhat.com Wed Dec 21 22:37:56 2016 From: japplewh at redhat.com (Jeff Applewhite) Date: Wed, 21 Dec 2016 17:37:56 -0500 Subject: [Tendrl-devel] labeling github issues In-Reply-To: <86df1867-453b-ef8d-0022-21c1018b2d53@redhat.com> References: <9ecd00cf-3d32-5335-945d-27e89e873e9c@redhat.com> <86df1867-453b-ef8d-0022-21c1018b2d53@redhat.com> Message-ID: Done for the QE group for both of your usmqe-* repos - for other repos this requires write access on the repo. We probably need some discussion on this before I make that change. By default all users get read on all repos. Changing this to write has lots of implications that should be thought through. Optionally the QE team could be added to all the relevant repos with write access. https://help.github.com/articles/repository-permission-levels-for-an-organization/ On Wed, Dec 21, 2016 at 3:41 PM, Martin Kudlej wrote: > Hi, > > could admins of github repositories give people from QE team permission to > set labels, please? > This will be very helpful for us and it gives us ability to better sort > and search issues and pull requests in Tendrl repositories. > > Thank you! > > On 12/01/2016 11:38 AM, Martin Bukatovic wrote: > >> On 12/01/2016 11:16 AM, Sankarshan Mukhopadhyay wrote: >> >>> On Thu, Dec 1, 2016 at 3:32 PM, Martin Bukatovic >>> wrote: >>> >>>> I would like to assign labels (such as "bug" or "question") to github >>>> issues I have created, but I don't seem to have the access rights >>>> needed. Could you reconfigure the Tendrl github group so that qe team >>>> members can add labels to theirs github issues? >>>> >>> >>> Alright. I'm missing something here. The specific label (names, which >>> you indicate) exist. Can you provide me with a link to a particular >>> issue? It should be easier for me to figure out what to do. 
>>> >> >> The problem I have here is that while the labels exists, and other >> team members are using them on some github issues, I'm unable to do >> so. >> >> When I click on "New issue" of any Tendrl project on github, I don't >> see the the knobs for setting the label at all [1] - the right panel >> which provides those options is missing. Neither I see them when I >> try to edit already created issue. Since I'm able to label issues of >> my own projects, I suspect that this is related to access rights >> of Tendrl github group. >> >> To try this yourself, try to click on "New issue" button of tendrl >> documentation project[2] and compare it with my screenshot[1]. >> If you are able to see knobs to set labels in the right panel, while >> I'm not provided with this option as shown on the screenshot, we >> would need to reconfigure access rights so that the qe team members >> can add labels to tendrl github issues. >> >> Thank you for your help. >> >> [1] https://ibin.co/33riFN0YCthe.png >> [2] https://github.com/Tendrl/documentation/issues/new >> >> > -- > Best Regards, > Martin Kudlej. > RHSC/USM Senior Quality Assurance Engineer > Red Hat Czech s.r.o. 
> > Phone: +420 532 294 155 > E-mail:mkudlej at redhat.com > IRC: mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, > #usm-meeting @ redhat > #tendrl-devel @ freenode > > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel > -- Jeff Applewhite Principal Product Manager From mrugesh at brainfunked.org Thu Dec 22 11:21:29 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Thu, 22 Dec 2016 16:51:29 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161222 Message-ID: https://meetbot.fedoraproject.org/tendrl-devel/2016-12-22/check-in_20161222.2016-12-22-09.08.html -- Mrugesh From nthomas at redhat.com Fri Dec 23 09:52:21 2016 From: nthomas at redhat.com (Nishanth Thomas) Date: Fri, 23 Dec 2016 15:22:21 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161223 Message-ID: https://meetbot.fedoraproject.org/tendrl-devel/2016-12-23/check-in_20161223.2016-12-23-09.03.html Thanks, Nishanth From mrugesh at brainfunked.org Mon Dec 26 10:03:46 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Mon, 26 Dec 2016 15:33:46 +0530 Subject: [Tendrl-devel] mkarnik's availability during the week for 26th Dec Message-ID: Hi, I'm on leave on Wednesday, 28th and Friday, 30th December. On the 29th, Thursday, I'll be able to do reviews in morning - I won't be available post 4 PM. Throughout the week, I'll have limited connectivity. I'll be ensuring that I'm able to do reviews and commit the specification changes that I'm working on, both today, Monday and tomorrow, Tuesday. I'll most likely not be on IRC. 
I'll be completely unreachable over the following time spans: * Wednesday, till 8 PM * Thursday, post 4 PM * Friday, till 8 PM -- Mrugesh From mrugesh at brainfunked.org Mon Dec 26 14:43:50 2016 From: mrugesh at brainfunked.org (Mrugesh Karnik) Date: Mon, 26 Dec 2016 20:13:50 +0530 Subject: [Tendrl-devel] [TRACKING] Summary for the week of 19th Dec, priorities for the week of 26th Dec Message-ID: Originally I conceived that weekly updates would be sent on Fridays. However, over the past couple of weeks, I've come to think that Mondays are much more appropriate. The weekend allows for reflection on the status of the prior week and the priorities for the upcoming week. That's the input that enables the weekly updates. So, as of now, I'll be sending these updates on Monday mornings. All the listed IDs are of issues on the specifications repository (https://github.com/specifications/issues) Progress from last week, December 19th to 23rd: Feature based progress: * List views via the UI: The UX discussions were helpful in imparting clarity upon the UI team for the implementation. Specifications and implementations for both the UI and the related core APIs have been merged. The monitoring APIs will be completed this week. These had to be paused due to PTOs. However, we had specified the required details for the integration between the core API and the monitoring API before Anmol's PTO. As such, the API work was able to proceed without being blocked. A demo recording of the merged work will be sent to the list by Wednesday. * Import cluster via the UI: Several discussions regarding the UX have brought forward a requirement for an enhanced import cluster workflow. This enhanced workflow would allow auto-deployment of the node agents on each of the cluster nodes via a bootstrap node. This workflow requires a new specification and has dependencies upon the provisioning functionality.
The current implementation is addressing some intermediate gaps such as auto-detection of the deployment layout of the storage systems and enabling the UI based workflow. With the completion of the list views functionality, this feature will now be implemented this week. * Tagged logging: A specification has been submitted and is under review. I'd like to extend special thanks to the ViaQ project members who have been kind enough to review the specification and submit their comments, just a day prior to the holidays in the US, on an hour's notice. Merged specifications, with implementation being reviewed: * Pluggable delivery endpoints for alerts (#40). * Fixes to be compatible with the get-state output changes introduced in gluster upstream, post 3.9 (#30). The implementation is awaiting a merge on pending reviews. * Refactoring to remove duplication of functionality between components and segregate the core framework into the common library (#31). Multiple rounds of pull requests and reviews have been done. Expected to be merged this week. * Built-in utilities that can be referenced in flows and reused across various components (#72). Specifications yet to be merged: * API integration to expose time series data (#62). This enables the display of graphs and utilisation information on the UI. The specification is still in progress. It is almost ready to be merged and implemented however. * Inventory for disks and networks per node (#41, #43). The disk inventory specification in particular underwent a long discussion and evolution over the week. Both specifications are about to be merged, pending an additional review to ensure that the API requirements are satisfied. Implementation is already underway since everything other than the data model in etcd has been finalised. Specifications that had to be paused due to PTOs: * Object specific flows, to enable per object actions to be displayed on the UI (#34).
* Versioned namespaces, to allow dynamic and backwards compatible support for newer versions of the storage systems and upgrades for the storage systems and tendrl components (#36). * Some streamlining to provide a centralised view of the definitions to ensure that all the tendrl components have a single view of the available functionality (#39), optimisations to reduce etcd traffic for definitions (#37) and additional attributes to aid dynamic API generation (#33, #35, #38). New specifications worked on: * Tracking ceph pool utilisation (#80). This data feeds into the list views UI. The specification is under review and is being updated. The implementation details were optimised thanks to contributions from folks on the ceph-devel list. * Mrugesh has submitted an initial node agent specification (#87), which provides an overview of the specific implementation paths to be undertaken by various other node agent related specifications. An update with the data model impact will be sent tomorrow. Priorities for the week of 26th to 30th: The following considerations have influenced the priorities this week: * Rohan is on leave this week, so the framework enhancements will need to stay paused. * Mrugesh is on leave for most of the week. Some reviews and merges may be delayed. Here are the priorities: * Import cluster workflow and auto-detection of the storage systems and Tendrl components themselves (#46, #54). * Import cluster UI (#56). * Pending monitoring stack and alerting implementations against existing specifications regarding API integration, list views in the UI etc. (#62, #79, #40). * Tagged logging (#28) and updates for in-flight operations (#55). It was discovered in the tagged logging specification that some of the functionality required for updates will be provided with the tagged logging implementation.
--
Mrugesh

From mrugesh at brainfunked.org Mon Dec 26 14:47:54 2016
From: mrugesh at brainfunked.org (Mrugesh Karnik)
Date: Mon, 26 Dec 2016 20:17:54 +0530
Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161223
In-Reply-To: References: Message-ID:

Team,

I've sent a pull request against #87, for the node agent functionality. I've added everybody who works on the backend as a reviewer. Please take a look at the specification and comment. Also, please take the details in the specification and see if they could be applied to the other specifications mentioned in the issue description.

I'll be sending a substantial update to the specification tomorrow, with the data model impact. It would be ideal if any changes to the specifications dependent on #87 could be made by Wednesday EoD, so that I could review them on Thursday morning. Please refer to the weekly summary for the priorities this week to help prioritise any specific ones for updates.

Thanks.

On 23 December 2016 at 15:22, Nishanth Thomas wrote:
> https://meetbot.fedoraproject.org/tendrl-devel/2016-12-23/check-in_20161223.2016-12-23-09.03.html
>
> Thanks,
> Nishanth
> _______________________________________________
> Tendrl-devel mailing list
> Tendrl-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/tendrl-devel

From shtripat at redhat.com Tue Dec 27 12:16:54 2016
From: shtripat at redhat.com (Shubhendu Tripathi)
Date: Tue, 27 Dec 2016 17:46:54 +0530
Subject: [Tendrl-devel] Regarding unique checksum for gluster cluster peers
Message-ID:

Hi Mrugesh,

Self and Darshan had a small discussion with Atin and Sameekshan sometime back. As we know, gluster doesn't allow a node to participate in multiple clusters, and so there is no concept of a cluster-id associated with the peers.
We discussed the option of generating a unique check-sum on all the peers of a cluster using peer information, and the suggestion goes as below.

On all the cluster nodes:
- Use `gluster peer probe` to get the list of all the connected peers and get their peer-ids (which are unique for each of the cluster peers)
- For the current peer, get the peer-id from `/var/lib/glusterd/glusterd.info`
- Order the peer-ids in some order (internal to tendrl node-agent)
- Generate a check-sum out of this
- Use the same logic on all the node-agents of the peers
- Use this check-sum as the unique identifier for the cluster

Thanks and Regards,
Shubhendu

From amukherj at redhat.com Tue Dec 27 12:29:50 2016
From: amukherj at redhat.com (Atin)
Date: Tue, 27 Dec 2016 17:59:50 +0530
Subject: [Tendrl-devel] Regarding unique checksum for gluster cluster peers
In-Reply-To: References: Message-ID:

On 12/27/2016 05:46 PM, Shubhendu Tripathi wrote:
> Hi Mrugesh,
>
> Self and Darshan had a small discussion with Atin and Sameekshan
> sometime back.
> As we know gluster doesnt allow a node to participate in multiple
> cluster and so there is no concept of cluster-id associated to the peers.
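[Editor's note: the proposed scheme — collect peer UUIDs, sort them, hash the result — can be sketched in a few lines of Python. This is an illustration only, not Tendrl code: the function name `cluster_checksum`, the sample UUID strings, and the choice of SHA-256 as the digest are all assumptions; the actual peer IDs would come from `gluster peer status` output plus the local UUID in /var/lib/glusterd/glusterd.info.]

```python
import hashlib

def cluster_checksum(peer_ids):
    # Sort the UUIDs so every node-agent computes the same digest,
    # regardless of the order in which it enumerated its peers.
    canonical = "\n".join(sorted(peer_ids))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Made-up UUIDs: two nodes seeing the same cluster in different orders
# must still agree on the identifier.
peers_seen_on_node_a = ["uuid-2", "uuid-1", "uuid-3"]
peers_seen_on_node_b = ["uuid-3", "uuid-2", "uuid-1"]
assert cluster_checksum(peers_seen_on_node_a) == cluster_checksum(peers_seen_on_node_b)
```

As noted later in the thread, any digest over the peer set changes whenever a peer is added or removed, so this identifies a membership snapshot rather than a stable cluster ID.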
>
> We discussed the option of generating unique check-sum on all the
> peers of a cluster using peer information and suggestion goes as below -
>
> On all the cluster nodes:
> - Use `gluster peer probe` to get the list of all the connected peers
> and get their peer-ids (which is unique for each of the cluster peers)

I believe you meant gluster peer list here :)

> - For the current peer get the peer-ids from
> `/var/lib/glusterd/glusterd.info`
> - Order the peer-ids in some order (internal to tendrl node-agent)
> - Generate a check-sum out of this
> - Use the same logic on all the node-agents of the peers
> - Use this check-sum as unique identifier for cluster
>
> Thanks and Regards,
> Shubhendu
>
> _______________________________________________
> Tendrl-devel mailing list
> Tendrl-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/tendrl-devel

From shtripat at redhat.com Wed Dec 28 01:49:31 2016
From: shtripat at redhat.com (Shubhendu Tripathi)
Date: Tue, 27 Dec 2016 20:49:31 -0500 (EST)
Subject: [Tendrl-devel] Regarding unique checksum for gluster cluster peers
Message-ID:

Sent from Samsung Mobile

-------- Original message --------
From: Atin
Date:27/12/2016 17:59 (GMT+05:30)
To: tendrl-devel at redhat.com
Subject: Re: [Tendrl-devel] Regarding unique checksum for gluster cluster peers

On 12/27/2016 05:46 PM, Shubhendu Tripathi wrote:
> Hi Mrugesh,
>
> Self and Darshan had a small discussion with Atin and Sameekshan
> sometime back.
> As we know gluster doesnt allow a node to participate in multiple
> cluster and so there is no concept of cluster-id associated to the peers.
>
> We discussed the option of generating unique check-sum on all the
> peers of a cluster using peer information and suggestion goes as below -
>
> On all the cluster nodes:
> - Use `gluster peer probe` to get the list of all the connected peers
> and get their peer-ids (which is unique for each of the cluster peers)

I believe you meant gluster peer list here :)

Thanks for pointing out. It's correct. We need to do peer list using 'gluster peer status'.

> - For the current peer get the peer-ids from
> `/var/lib/glusterd/glusterd.info`
> - Order the peer-ids in some order (internal to tendrl node-agent)
> - Generate a check-sum out of this
> - Use the same logic on all the node-agents of the peers
> - Use this check-sum as unique identifier for cluster
>
> Thanks and Regards,
> Shubhendu
>
> _______________________________________________
> Tendrl-devel mailing list
> Tendrl-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/tendrl-devel

_______________________________________________
Tendrl-devel mailing list
Tendrl-devel at redhat.com
https://www.redhat.com/mailman/listinfo/tendrl-devel

From amukherj at redhat.com Wed Dec 28 03:46:50 2016
From: amukherj at redhat.com (Atin Mukherjee)
Date: Wed, 28 Dec 2016 03:46:50 +0000
Subject: [Tendrl-devel] Regarding unique checksum for gluster cluster peers
In-Reply-To: References: Message-ID:

gluster peer list would be a better choice here.

On Wed, 28 Dec 2016 at 07:19, Shubhendu Tripathi wrote:
>
> Sent from Samsung Mobile
>
> -------- Original message --------
> From: Atin
> Date:27/12/2016 17:59 (GMT+05:30)
> To: tendrl-devel at redhat.com
> Subject: Re: [Tendrl-devel] Regarding unique checksum for gluster cluster peers
>
> On 12/27/2016 05:46 PM, Shubhendu Tripathi wrote:
> > Hi Mrugesh,
> >
> > Self and Darshan had a small discussion with Atin and Sameekshan
> > sometime back.
> > > As we know gluster doesnt allow a node to participate in multiple > > > cluster and so there is no concept of cluster-id associated to the peers. > > > > > > We discussed the option of generating unique check-sum on all the > > > peers of a cluster using peer information and suggestion goes as below - > > > > > > On all the cluster nodes: > > > - Use `gluster peer probe` to get the list of all the connected peers > > > and get their peer-ids (which is unique for each of the cluster peers) > > > > I believe you meant gluster peer list here :) > > > > Thanks for pointing out. Its correct. We need to do peer list using > 'gluster peer status'. > > > > > > > - For the current peer get the peer-ids from > > > `/var/lib/glusterd/glusterd.info` > > > - Order the peer-ids in some order (internal to tendrl node-agent) > > > - Generate a check-sum out of this > > > - Use the same logic on all the node-agents of the peers > > > - Use this check-sum as unique identifier for cluster > > > > > > Thanks and Regards, > > > Shubhendu > > > > > > _______________________________________________ > > > Tendrl-devel mailing list > > > Tendrl-devel at redhat.com > > > https://www.redhat.com/mailman/listinfo/tendrl-devel > > > > _______________________________________________ > > Tendrl-devel mailing list > > Tendrl-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/tendrl-devel > > _______________________________________________ > > Tendrl-devel mailing list > > Tendrl-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/tendrl-devel > > -- - Atin (atinm) From avishwan at redhat.com Wed Dec 28 06:15:18 2016 From: avishwan at redhat.com (Aravinda) Date: Wed, 28 Dec 2016 11:45:18 +0530 Subject: [Tendrl-devel] Regarding unique checksum for gluster cluster peers In-Reply-To: References: Message-ID: <58057778-89d6-fa77-5aae-82ef494fb893@redhat.com> Checksum will change when new node is added/removed from the existing cluster. 
So checksum from peers list may not be suitable for finding Cluster ID. regards Aravinda On 12/28/2016 09:16 AM, Atin Mukherjee wrote: > gluster peer list would be a better choice here. > > On Wed, 28 Dec 2016 at 07:19, Shubhendu Tripathi > wrote: > >> >> >> >> >> >> Sent from Samsung Mobile >> >> >> >> -------- Original message -------- >> >> From: Atin >> >> Date:27/12/2016 17:59 (GMT+05:30) >> >> To: tendrl-devel at redhat.com >> >> Subject: Re: [Tendrl-devel] Regarding unique checksum for gluster cluster >> peers >> >> >> >> >> >> >> >> On 12/27/2016 05:46 PM, Shubhendu Tripathi wrote: >> >>> Hi Mrugesh, >>> Self and Darshan had a small discussion with Atin and Sameekshan >>> sometime back. >>> As we know gluster doesnt allow a node to participate in multiple >>> cluster and so there is no concept of cluster-id associated to the peers. >>> We discussed the option of generating unique check-sum on all the >>> peers of a cluster using peer information and suggestion goes as below - >>> On all the cluster nodes: >>> - Use `gluster peer probe` to get the list of all the connected peers >>> and get their peer-ids (which is unique for each of the cluster peers) >> >> >> I believe you meant gluster peer list here :) >> >> >> >> Thanks for pointing out. Its correct. We need to do peer list using >> 'gluster peer status'. 
>> >> >> >> >> >>> - For the current peer get the peer-ids from >>> `/var/lib/glusterd/glusterd.info` >>> - Order the peer-ids in some order (internal to tendrl node-agent) >>> - Generate a check-sum out of this >>> - Use the same logic on all the node-agents of the peers >>> - Use this check-sum as unique identifier for cluster >>> Thanks and Regards, >>> Shubhendu >>> _______________________________________________ >>> Tendrl-devel mailing list >>> Tendrl-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/tendrl-devel >> >> >> _______________________________________________ >> >> Tendrl-devel mailing list >> >> Tendrl-devel at redhat.com >> >> https://www.redhat.com/mailman/listinfo/tendrl-devel >> >> _______________________________________________ >> >> Tendrl-devel mailing list >> >> Tendrl-devel at redhat.com >> >> https://www.redhat.com/mailman/listinfo/tendrl-devel >> >> -- > - Atin (atinm) > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel From shtripat at redhat.com Wed Dec 28 06:18:25 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Wed, 28 Dec 2016 11:48:25 +0530 Subject: [Tendrl-devel] Regarding unique checksum for gluster cluster peers In-Reply-To: <58057778-89d6-fa77-5aae-82ef494fb893@redhat.com> References: <58057778-89d6-fa77-5aae-82ef494fb893@redhat.com> Message-ID: <3169d47a-e4a2-32eb-77fc-d446c03c8c58@redhat.com> On 12/28/2016 11:45 AM, Aravinda wrote: > Checksum will change when new node is added/removed from the existing > cluster. So checksum from peers list may not be suitable for finding > Cluster ID. Ah, thats a good point and in that case we would need to update the check-sum in tendrl as well for all the nodes. @Mrugesh, comments?? > > regards > Aravinda > > On 12/28/2016 09:16 AM, Atin Mukherjee wrote: >> gluster peer list would be a better choice here. 
>> >> On Wed, 28 Dec 2016 at 07:19, Shubhendu Tripathi >> wrote: >> >>> >>> >>> >>> >>> >>> Sent from Samsung Mobile >>> >>> >>> >>> -------- Original message -------- >>> >>> From: Atin >>> >>> Date:27/12/2016 17:59 (GMT+05:30) >>> >>> To: tendrl-devel at redhat.com >>> >>> Subject: Re: [Tendrl-devel] Regarding unique checksum for gluster >>> cluster >>> peers >>> >>> >>> >>> >>> >>> >>> >>> On 12/27/2016 05:46 PM, Shubhendu Tripathi wrote: >>> >>>> Hi Mrugesh, >>>> Self and Darshan had a small discussion with Atin and Sameekshan >>>> sometime back. >>>> As we know gluster doesnt allow a node to participate in multiple >>>> cluster and so there is no concept of cluster-id associated to the >>>> peers. >>>> We discussed the option of generating unique check-sum on all the >>>> peers of a cluster using peer information and suggestion goes as >>>> below - >>>> On all the cluster nodes: >>>> - Use `gluster peer probe` to get the list of all the connected peers >>>> and get their peer-ids (which is unique for each of the cluster peers) >>> >>> >>> I believe you meant gluster peer list here :) >>> >>> >>> >>> Thanks for pointing out. Its correct. We need to do peer list using >>> 'gluster peer status'. 
>>> >>> >>> >>> >>> >>>> - For the current peer get the peer-ids from >>>> `/var/lib/glusterd/glusterd.info` >>>> - Order the peer-ids in some order (internal to tendrl node-agent) >>>> - Generate a check-sum out of this >>>> - Use the same logic on all the node-agents of the peers >>>> - Use this check-sum as unique identifier for cluster >>>> Thanks and Regards, >>>> Shubhendu >>>> _______________________________________________ >>>> Tendrl-devel mailing list >>>> Tendrl-devel at redhat.com >>>> https://www.redhat.com/mailman/listinfo/tendrl-devel >>> >>> >>> _______________________________________________ >>> >>> Tendrl-devel mailing list >>> >>> Tendrl-devel at redhat.com >>> >>> https://www.redhat.com/mailman/listinfo/tendrl-devel >>> >>> _______________________________________________ >>> >>> Tendrl-devel mailing list >>> >>> Tendrl-devel at redhat.com >>> >>> https://www.redhat.com/mailman/listinfo/tendrl-devel >>> >>> -- >> - Atin (atinm) >> _______________________________________________ >> Tendrl-devel mailing list >> Tendrl-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/tendrl-devel > > _______________________________________________ > Tendrl-devel mailing list > Tendrl-devel at redhat.com > https://www.redhat.com/mailman/listinfo/tendrl-devel From shtripat at redhat.com Wed Dec 28 09:43:56 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Wed, 28 Dec 2016 15:13:56 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161228 Message-ID: <99e53989-0eb9-fe25-0b6b-9c7fa2730537@redhat.com> https://meetbot.fedoraproject.org/tendrl-devel/2016-12-28/check-in_20161228.2016-12-28-09.02.html Regards, Shubhendu From shtripat at redhat.com Thu Dec 29 09:25:20 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Thu, 29 Dec 2016 14:55:20 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161229 Message-ID: 
https://meetbot.fedoraproject.org/tendrl-devel/2016-12-29/check-in_20161229.2016-12-29-09.01.html Regards, Shubhendu From shtripat at redhat.com Fri Dec 30 09:18:44 2016 From: shtripat at redhat.com (Shubhendu Tripathi) Date: Fri, 30 Dec 2016 14:48:44 +0530 Subject: [Tendrl-devel] [TRACKING] Daily check-in summary for 20161230 Message-ID: <248451e6-d102-742a-0bfe-cf08cdc478f4@redhat.com> https://meetbot.fedoraproject.org/tendrl-devel/2016-12-30/check-in_20161230.2016-12-30-09.00.html Regards, Shubhendu