[Tendrl-devel] UX: Design Approach and Information Architecture

Ric Wheeler rwheeler at redhat.com
Tue Oct 4 07:36:00 UTC 2016


On 10/04/2016 07:08 AM, John Spray wrote:
> On Sun, Oct 2, 2016 at 8:53 PM, John Spray <jspray at redhat.com> wrote:
>> On Fri, Sep 30, 2016 at 9:53 PM, Ju Lim <julim at redhat.com> wrote:
>>> Hi John:
>>>
>>> Sorry for the slow response — I was tied up in offsite meetings in the
>>> later part of this week.
>>>
>>> I appreciate all the feedback and have taken some time to think about the
>>> IA a little more.  I can’t think of strong enough use cases/user stories to
>>> have snapshots and quotas at the 2nd level, given that the user would have
>>> other places to trigger related actions/workflows — specifically from when
>>> they are “standing” on the Pool/RBD/File Share (storage top-level entity)
>>> itself as well as within the Cluster object details itself.  It’s always
>>> in-context of the storage top-level entity or Cluster.  The only exception
>>> is for reporting-type use cases across all clusters being managed, but even
>>> that is probably more applicable for quotas for things like
>>> chargebacks/showback, which is probably not a core use case for Tendrl.
>>> I’m glad you and Nishant raised this concern as I think some of the initial
>>> underlying assumptions have evolved especially if there’s a higher level
>>> management tool that leverages what Tendrl collects.
>>>
>>> I've made updates to the document UX Design Approach and Information
>>> Architecture
>>> <https://docs.google.com/a/redhat.com/presentation/d/1P4ejy0Q7BT0PHa2H4wC_nFAh--Db72NpX8yT44qU3Rc/edit?usp=sharing>
>>>   to reflect what I stated above (i.e. removal of snapshots and quotas from
>>> the 2nd level nav).
>>>
>>>
>>> Regarding whether it’s worth rethinking using the whole first level and
>>> possibly bringing CephFS, RBD, RGW, etc. up to the top level: this was
>>> something I had thought about early on but lumped everything under Storage
>>> as putting them at the top level might get unwieldy as more things are
>>> added.  If we were going to remain static at only supporting Ceph and
>>> Gluster, then it would be okay.  However, to support future things that get
>>> added, ideally you don’t want to keep adding them to the top as there
>>> wouldn’t be room to grow and also it might be “surprising” when the
>>> top-level menu keeps changing and expanding, and eventually wrapping when
>>> we run out of room.
>> I see what you mean; I think this depends a lot on how multiple
>> storage systems are handled.  If we had an install-time sense of which
>> type of system was being managed (e.g. installing particular named
>> module package for the storage system), then it would be reasonable to
>> put everything at the top level as presumably users wouldn't install
>> anything they didn't need.  If the console didn't know that at install
>> time, then you'd need to avoid having too much at the top level
>> (although I'm not sure it's that much better to have an unbounded list
>> of subsystems under Storage instead of at the top level).
>>
>> So I guess it would actually be useful to know for context, are users
>> going to see a UI that is configured at install time to show/hide the
>> ceph/gluster specific pieces?  Any comments from engineers on this?
>> As a Ceph person I would really like it if we could give Ceph users
>> something that didn't show them any un-needed Gluster stuff.
> *bump* can anyone tell me the answer?
>
> John

Hi John,

For the UX bits that Red Hat developers are working on, I don't see us having 
different top levels for Gluster and Ceph. That's certainly not something we 
tried for in the current management application.

Other projects can, of course, build something Ceph- (or Gluster-) specific 
for their top-level UX.

Regards,
Ric

>
>>> With regard to user identities, I believe they're cluster-specific, so that
>>> would get triggered from within the cluster object details; it also isn't a
>>> frequent use case, and I assumed it's typically performed at initial setup
>>> time.  This could be handled through an Edit Cluster, and/or an optional step
>>> in the Create Cluster workflow, and/or an optional step in the Create RBD /
>>> Pool workflow, and/or a dedicated workflow optimized for configuration of
>>> user identities.  Does this make sense?
>> Yes, I think it will naturally pop up in multiple places.  Keys belong
>> to a cluster, but creating one is usually contextual to a particular
>> subsystem (crafting the right kind of key for rbd, cephfs, rgw depends
>> which pool, filesystem, etc you want to access).  This would actually
>> be a situation where having some metadata in ceph to mark which
>> subsystem a key is intended for would be handy (we don't currently
>> support any "tagging" mechanism for keys, but it has been discussed),
>> so I'd encourage anyone working on key management to think about that
>> early so that we can look at putting anything needed into ceph.
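As a concrete sketch of the contextual key creation described above, i.e. a cephx identity whose capabilities are scoped to one subsystem rather than reusing client.admin (the pool and client names here are hypothetical examples):

```shell
# Hypothetical example: an identity for RBD clients, restricted to one pool.
ceph auth get-or-create client.rbd-user \
    mon 'allow r' \
    osd 'allow rwx pool=vms'

# Hypothetical example: an identity for CephFS clients.
ceph auth get-or-create client.fs-user \
    mon 'allow r' \
    mds 'allow rw' \
    osd 'allow rw pool=cephfs_data'

# Inspect the capabilities that were granted.
ceph auth get client.rbd-user
```

Note that nothing in either key records which subsystem it was created for; the intent lives only in the caps, which is the missing "tagging" mechanism mentioned above.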
>>
>> I wonder how it was concluded that it wasn't a frequent use case?
>> Don't all users need to create keys to access their cluster?
>> Hopefully folks testing this solution are not all just using their
>> "client.admin" key for everything (akin to logging in as root).
>>
>> John
>>
>>> Thoughts?
>>>
>>> Thanks,
>>> Ju
>>>
>>> On Wed, Sep 28, 2016 at 7:15 PM, John Spray <jspray at redhat.com> wrote:
>>>
>>>> On Wed, Sep 28, 2016 at 8:41 PM, Ju Lim <julim at redhat.com> wrote:
>>>>> Hi Nishant:
>>>>>
>>>>> Thanks for reviewing.  With regard to how the dashboard will look when
>>>>> Ceph and Gluster are in the picture:
>>>>>
>>>>> When there is a single storage subsystem present, the single dashboard
>>>> for
>>>>> the single storage subsystem is presented by default.
>>>>>
>>>>> When there are multiple storage subsystems present, the Dashboard would
>>>>> present each dashboard in its own tab.  Basically, we'd have a Ceph tab
>>>> and
>>>>> a Gluster tab.
>>>>>
>>>>> Ideally, the default dashboard tab could be specified on a per-user
>>>>> basis, e.g. in a user’s profile/settings, initially configured by the
>>>>> Administrator user. It's a nice-to-have at this point, and if folks
>>>>> agree, it should be added to the backlog.
>>>>>
>>>>> With regard to unifying concepts such as quotas, file shares, etc., you
>>>>> raise a fair concern. There are multiple approaches. For quotas and
>>>>> snapshots, my current thinking is that when we get to the Quotas or
>>>>> Snapshots section, we would either have a unified view with filtering
>>>>> capabilities (for the different types), or tabbed views, or a 3rd level
>>>>> navigation, or some other combination.  This will be determined at a
>>>> later
>>>>> date when the feature gets added. Going with a tabbed approach or 3rd
>>>> level
>>>>> navigation could potentially allow for a plug-in based approach whereby
>>>>> specific storage subsystem capabilities can be exposed as needed.
>>>>>
>>>>> For File Shares, one approach I suggested was to handle it via the user's
>>>>> workflow. Another approach could be to handle it via the navigation,
>>>>> which potentially increases complexity by introducing more levels. Our
>>>>> goal is not to go deeper than 3 levels of
>>>>> navigation as it will become an unwieldy user experience if things are
>>>>> buried too deeply. That being said, whether it's handled via the
>>>> workflow,
>>>>> navigation, or some other means, using a plug-in based approach should
>>>> still
>>>>> work.
>>>> Regarding navigation depth, I wonder if it's worth rethinking using
>>>> the whole first level just to get into "Storage" or "Clusters" --
>>>> isn't everything we do about storage clusters?  In my mind, things
>>>> like cephfs, rbd, rgw are really top-level items.
>>>>
>>>> Regarding unifying quotas and snapshots, I think it would be natural
>>>> for a user to go cephfs->quotas rather than quotas->cephfs, because at
>>>> any given moment they are probably concentrating on one particular
>>>> subsystem.  I think the cephfs quotas and cephfs snapshots have more
>>>> in common (and belong closer together) than e.g. cephfs quotas and
>>>> gluster quotas, or cephfs snapshots and rbd snapshots.  In a cephfs
>>>> UI, we would probably hope to see a tree view that showed directories
>>>> in the filesystem and enabled things like setting layouts and quotas
>>>> on directories -- picturing that, I don't see a sane way for cephfs
>>>> quotas to live under a top-level Quotas subsystem unless the whole
>>>> widget was duplicated inside the CephFS subsystem as well.
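For reference, CephFS directory quotas today are set as extended attributes on directories of a mounted filesystem, which is why they fit naturally under a per-directory tree view (the mount point and values below are hypothetical):

```shell
# CephFS quotas are xattrs on directories of a CephFS mount
# (paths and values here are examples).
setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/projects
setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/projects

# Read a quota back.
getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects
```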
>>>>
>>>> Not trying to entirely derail this into a discussion of CephFS, but I
>>>> think the best way to see if this design is a good fit is to work
>>>> through some of the more challenging cases like the N types of quota
>>>> that will ultimately need managing (not just those supported in the
>>>> first release of the software), and see if what comes out makes sense.
>>>>
>>>> By the way, I was just taking another look at slide 12, and I don't
>>>> see user identities[1] (i.e. the identities that we use for access
>>>> control of Ceph clients, not the Tendrl users).  Those should probably
>>>> be in there somewhere.  They intersect both the "Cluster" section
>>>> (ceph keys are managed by the mons) and the RBD/RGW/CephFS parts (to
>>>> generate a key with meaningful capabilities you need to know which
>>>> subsystem it's going to be using).
>>>>
>>>> John
>>>>
>>>> 1. http://docs.ceph.com/docs/hammer/rados/operations/user-management/
>>>>
>>>>
>>>>> I've made updates to the document UX Design Approach and Information
>>>>> Architecture to incorporate the points raised by both yours and John's
>>>>> feedback.
>>>>>
>>>>> Thank you,
>>>>> Ju
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Sep 28, 2016 at 3:55 AM, Nishanth Thomas <nthomas at redhat.com>
>>>> wrote:
>>>>>> It's good that we are not going to unify the dashboard pieces. With this
>>>>>> in mind, how will the main dashboard look if we have a deployment with
>>>>>> Ceph and Gluster?
>>>>>> Also, I have a concern with respect to unifying 'File Shares', 'Quotas',
>>>>>> etc., as it might complicate the navigation. Instead, we could look at a
>>>>>> plug-in-based approach where filesystem-specific things are kept
>>>>>> separate and installed on an as-needed basis.
>>>>>>
>>>>>> On Wed, Sep 28, 2016 at 2:12 AM, Ju Lim <julim at redhat.com> wrote:
>>>>>>> Hi John:
>>>>>>>
>>>>>>> Thank you for taking the time to review and sharing your feedback.
>>>>>>>
>>>>>>> WRT terminology, I think that's a tricky one and I can understand and
>>>>>>> appreciate the confusion with RBD (vs. Block Device) and File Share.
>>>>>>> The original decision to go with RBD was that it is very Ceph-specific
>>>>>>> in terms of implementation and differs from LVM and other block
>>>>>>> technologies.
>>>>>>> Having said that, I would agree that File Share deviates from this
>>>> initial
>>>>>>> premise since it could mean a Gluster volume or a CephFS fileshare.
>>>>>>>
>>>>>>> The current thinking regarding File Share is to use it for both Ceph
>>>> and
>>>>>>> Gluster, and have a way to qualify it through the notion of the
>>>> workload.
>>>>>>> Part of the rationale was to try to make it so users did not have to
>>>>>>> think "should I use Ceph or Gluster?", but rather let the system make
>>>>>>> an "intelligent" choice for the mere-mortal user based on just enough
>>>>>>> information from that user.  This way, the user thinks of Red Hat
>>>>>>> Storage as providing this capability, and if we swapped out Ceph or
>>>>>>> Gluster for something else in the future, it would allow for that.  In
>>>>>>> conjunction, there would still be a way for a user to qualify it for
>>>>>>> Ceph or Gluster, so it's not just for the mere mortal but also the
>>>>>>> more knowledgeable / ninja user.
>>>>>>> Having said that, RGW (and Swift) are definitely in mind too, so this
>>>> is
>>>>>>> not limited to just File Shares but applies to pretty much block,
>>>> object,
>>>>>>> and file services.
>>>>>>>
>>>>>>> With regards to quotas and snapshots, I do agree that there are many
>>>>>>> different kinds.  It did not seem prudent to create a Pool Snapshot,
>>>> RBD
>>>>>>> Snapshot and Clones, Volume Snapshot and Clones, File Share Quotas,
>>>> Pool
>>>>>>> Quotas, Directory Quotas, etc. in the 2nd-level navigation, as it is not
>>>>>>> scalable (and would get very crowded very quickly), and would create a
>>>>>>> fairly complex experience for the user.  My current thinking is that
>>>> when we
>>>>>>> get to the Quotas or Snapshots section, we would either have a unified
>>>> view
>>>>>>> with filtering capabilities (for the different types), or tabbed
>>>> views, or a
>>>>>>> 3rd level navigation, or some other combination.  Since quotas and
>>>> snapshots
>>>>>>> are not in the immediate release plans, I've deferred this to the
>>>> detailed
>>>>>>> design stage when we tackle those topics.
>>>>>>>
>>>>>>> I hope this addresses the concerns you raised.  I'll update the slide
>>>>>>> deck with these comments before it gets published more formally.
>>>>>>>
>>>>>>> Thanks again,
>>>>>>> Ju
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Sep 26, 2016 at 5:34 AM, John Spray <jspray at redhat.com> wrote:
>>>>>>>> On Fri, Sep 23, 2016 at 7:03 PM, Ju Lim <julim at redhat.com> wrote:
>>>>>>>>> Hi.  I've started documenting the UX Design Approach and Information
>>>>>>>>> Architecture (JIRA TEN-40).  It includes some of our guiding
>>>>>>>>> principles and
>>>>>>>>> decisions that we've made in our design.
>>>>>>>>>
>>>>>>>>> Here's an initial draft that you can take a look at:
>>>>>>>>>
>>>>>>>>> https://docs.google.com/a/redhat.com/presentation/d/1P4ejy0Q7BT0PHa2H4wC_nFAh--Db72NpX8yT44qU3Rc/edit?usp=sharing
>>>>>>>>> Note: For now it's open to comments by anyone with a Red Hat account.
>>>>>>>>> I plan to publish this to Jira and/or GitHub before the end of Sprint
>>>>>>>>> 1 (Oct 4), but I'd like to open this up to review, comments, and
>>>>>>>>> discussion in the meantime.
>>>>>>>> Thanks for posting!
>>>>>>>>
>>>>>>>> My main piece of feedback is on terminology -- I think it's going to
>>>>>>>> get confusing to use a mixture of qualified names (e.g. "RBD" instead
>>>>>>>> of "block device") and unqualified names (e.g. "File Share" when we
>>>>>>>> really mean "Gluster").
>>>>>>>>
>>>>>>>> I have a special attachment to the use of "File Share" because when Tendrl
>>>>>>>> gets support for CephFS, everywhere the term is used is going to need
>>>>>>>> qualifying with whether we're talking about Ceph or Gluster.  I don't
>>>>>>>> think anyone is working on adding Tendrl support for CephFS at the
>>>>>>>> moment, but it should be part of the design process, along with RGW.
>>>>>>>>
>>>>>>>> Terms like "Quotas" are also dangerous, because we have so many
>>>>>>>> different kinds.  In Ceph alone, we have Quotas on pools, and then a
>>>>>>>> totally different type of quota on directories in cephfs.  Same for
>>>>>>>> snapshots, we have pool snapshots, rbd snapshots, cephfs snapshots.
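To make that distinction concrete, each of these is a separate mechanism with its own command (the pool, image, and snapshot names below are hypothetical):

```shell
# Pool quota: enforced by the OSDs on a whole pool.
ceph osd pool set-quota mypool max_bytes 107374182400

# Pool snapshot: a RADOS-level snapshot of the entire pool.
ceph osd pool mksnap mypool mysnap

# RBD snapshot: per-image; a pool uses either self-managed (rbd) or
# pool snapshots, not both.
rbd snap create mypool/myimage@mysnap

# CephFS directory quota: an xattr on a directory of a mounted filesystem.
setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/dir
```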
>>>>>>>>
>>>>>>>> John
>>>>>>>>
>>>>>>>>> Thank you,
>>>>>>>>> Ju Lim
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Tendrl-devel mailing list
>>>>>>>>> Tendrl-devel at redhat.com
>>>>>>>>> https://www.redhat.com/mailman/listinfo/tendrl-devel
>>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Ju Lim
>>>>>>> Red Hat
>>>>>>> Office: 978-399-0422
>>>>>>> Mobile: 781-507-1323
>>>>>>> Email: julim at redhat.com
>>>>>>> IRC: julim
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>
>>>




