[EnMasse] multitenancy and density (was Re: EnMasse multitenancy roles)

Rob Godfrey rgodfrey at redhat.com
Mon Mar 27 07:42:35 UTC 2017


On 24 March 2017 at 22:12, Gordon Sim <gsim at redhat.com> wrote:

> On 23/03/17 13:34, Ulf Lilleengen wrote:
>
>> Hi,
>>
>> Resending this as 3 of us were not subscribed to the list.
>>
>> After our discussion yesterday, I've tried to collect my thoughts on
>> multitenancy. In our past discussions there have been sort of 2 views on
>> multitenancy: one where multitenancy is handled within the dispatch
>> router, and one with multiple isolated router networks. As Rob mentioned
>> (and I agree) we should think of supporting both.
>>
>> I don't think we took into account supporting both isolated and
>> non-isolated tenants when we discussed this earlier. And I'm not sure
>> whether we should think of it as just 1 role or 2 roles externally:
>>
>> * Client - connects to the messaging endpoint
>> * Tenant - Manages one address space
>> (* Instance - Has 1 or more tenants)
>> * Messaging operator - Manages EnMasse instances and tenants
>> * OpenShift operator - Manages OpenShift
>>
>> Instances are isolated into separate OpenShift namespaces, while a
>> tenant may share the same instance (routers and possibly brokers) with
>> other tenants.
>>
>> Does it make sense to think of it this way? With this definition we have
>> support for multiple instances today, but not multiple tenants within
>> the same instance.
>>
>
>
I'm fine with the above terms, though I'd prefer to find an alternative
name for "Tenant"... The Tenant "role" in your definition is, I think, the
Address Space Manager.  I'm not sure what role would be specifically tied
to an instance - that seems to be covered by the Messaging Operator... I
guess one question is who (Address Space Manager or Messaging Operator) has
the ability to scale up / scale down aspects of the service.
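
(Purely to make that question concrete, here is a hypothetical sketch of the
role/operation split we seem to be converging on - the names are illustrative
only, not an agreed API, and the SCALE_INFRASTRUCTURE line is exactly the
undecided bit:)

    /**
     * Hypothetical sketch of the role split under discussion: which role may
     * perform which class of operation. Role and operation names are
     * illustrative only.
     */
    public class RoleSketch {

        enum Role { CLIENT, ADDRESS_SPACE_MANAGER, MESSAGING_OPERATOR, OPENSHIFT_OPERATOR }

        enum Operation { SEND_RECEIVE, MANAGE_ADDRESSES, SCALE_INFRASTRUCTURE, MANAGE_CLUSTER }

        static boolean allowed(Role role, Operation op) {
            switch (op) {
                case SEND_RECEIVE:         return role == Role.CLIENT;
                case MANAGE_ADDRESSES:     return role == Role.ADDRESS_SPACE_MANAGER;
                // open question: should this belong to the Address Space Manager instead?
                case SCALE_INFRASTRUCTURE: return role == Role.MESSAGING_OPERATOR;
                case MANAGE_CLUSTER:       return role == Role.OPENSHIFT_OPERATOR;
                default:                   return false;
            }
        }
    }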


> I have also been trying to collect my thoughts, specifically on the
> motivation for density and how best to address it. (The following is a bit
> of a ramble)
>
> The desire for greater density is a desire for more efficient utilisation
> of resources: cpu, memory, file handles and disk space.
>
> My assumption is that virtualisation provides efficient *cpu* utilisation
> without requiring shared processes.
>
> Given the way the broker journal works, I doubt there is any gain in the
> efficient use of *disk space* from sharing a broker between tenants as
> opposed to having each tenant use their own broker.
>
> I'm also assuming the bulk of the file handles used will be from the
> applications' own connections into the messaging service, so there would be
> no significant gain in efficiency in file handle utilisation from sharing
> infrastructure between tenants either.
>
> So I think the issue of density boils down to memory use.
>
> The argument for sharing a broker (or router) between tenants is that
> there is some minimum memory overhead for a broker (or router) process,
> independent of how much work it is actually doing, and that this overhead
> is significant.
>
> Clearly there *is* some overhead, but perhaps then it would be worth
> experimenting a little to see if we can determine what it is, whether it
> can be reduced or tuned down in any way, and how it compares to the amount
> of memory consumed by different workloads.
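
(To make that experiment concrete: a minimal, Linux-only sketch of the kind
of probe we could point at an idle qdrouterd or broker process, and then
again under a representative workload - the class name and approach are just
an illustration:)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    /**
     * Prints the resident set size (VmRSS) and peak RSS (VmHWM) of a process,
     * read from /proc/<pid>/status, so that idle overhead can be compared
     * with the same process under load. Pass the PID as the first argument,
     * or omit it to measure this JVM itself.
     */
    public class RssProbe {
        public static void main(String[] args) throws IOException {
            String pid = args.length > 0 ? args[0] : "self";
            List<String> status = Files.readAllLines(Paths.get("/proc", pid, "status"));
            status.stream()
                  .filter(l -> l.startsWith("VmRSS") || l.startsWith("VmHWM"))
                  .forEach(System.out::println);
        }
    }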
>
> Focusing just on the core messaging components to begin with, the minimal
> install of the current architecture would be a single router and single
> broker (since a broker can now host multiple queues).
>
> For a single broker, the router only adds value if the application(s)
> require the direct semantics that it offers, and it is only needed if those
> semantics are required over a connection that also accesses brokered
> addresses.
>
>
For consistency / simplicity of deployment we might also want to have the
router consistently provide the interface to authentication/authorisation
services... however I would hope that the router would be "lightweight" with
regards to memory (and cpu and disk) compared to a broker.  Personally, if
we want to try to reduce the footprint of each address, I would look at
whether we can decompose the "broker" part in such a way that if all we need
is a "queue", then a queue is all that is provided.  Of course, whether Java
is the best platform to provide memory-efficient services is an entirely
different question :-).
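
(As a very rough illustration of the per-address floor on the JVM - this
measures only bare in-memory queue objects, not anything a real broker needs
per address such as journal, paging or protocol state, so treat the number
as a lower bound:)

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;

    /**
     * Back-of-the-envelope probe: allocate a large number of empty in-memory
     * queues and see how much the heap grows, to get a floor for per-queue
     * data-structure overhead on the JVM.
     */
    public class QueueOverheadProbe {
        public static void main(String[] args) {
            int count = 100_000;
            List<ConcurrentLinkedQueue<byte[]>> queues = new ArrayList<>(count);

            long before = usedHeap();
            for (int i = 0; i < count; i++) {
                queues.add(new ConcurrentLinkedQueue<>());
            }
            long after = usedHeap();

            System.out.printf("~%d bytes per empty queue%n", (after - before) / count);
        }

        private static long usedHeap() {
            System.gc(); // best-effort hint; good enough for a rough estimate
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }
    }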


> If an application/tenant needs more than a single broker/router, it would
> seem to me that there would be little benefit from trying to share with
> other tenants.
>
>
Overall the above is very similar to my current thinking.  Where a shared
router infrastructure might provide more value (and this is just some very
hazy thinking on my part) is potentially things like cross
cluster/datacenter links - pure infrastructure which is not specific to a
given "application".  The only other "plus" for shared router
infrastructure is potentially some sort of reduction in the number of
entities that need to be "managed"/"monitored"... however I think tying a
router to a particular application, rather than having all routers shared
across many applications, is probably actually easier from a management
perspective.



> So the most compelling use case for shared infrastructure is where there
> are a lot of very small applications that could share a broker. Perhaps
> this use case would be better catered for by Rob (Godfrey)'s 'virtual
> broker' concept? I.e. maybe we have quite different underlying
> infrastructure for different service+plan combinations?
>
>
Agreed - as I was saying last week, I think we need to go through each of
the axes of scaling we want to cater for and define how we are going to
achieve that.  The pattern for very dense, low usage queues is very
different from a massively distributed high throughput queue.

-- Rob

