[almighty] New thread - discussion on: capacity, scalability, responsive tests (and performance tests)

Todd Mancini tmancini at redhat.com
Wed Oct 12 18:35:33 UTC 2016


Agreed, this is an important topic. I'm happy to respond, as my answers
generally cause all sorts of interesting turmoil.

How Many Users?

As many as possible. I know that's a flip answer, but it's true. So let me
rephrase the question ever so slightly differently -- what user capacity
can we handle per 'deployment unit'? I'm just making words up here, but the
basic gist is this: take a sampling of, say, EC2 machine sizes (t2.large,
c4.4xlarge, r3.2xlarge, etc.) and determine how many simultaneous users we
can comfortably accommodate on them. We'll also need to guesstimate storage
per user (probably based upon 3 user types: low usage, average usage and
high usage). With all of that, we can start to model out how much we need
to purchase in order to support X users per month, for any value of X.

Separately, we'll model values for X over time, and from that we'll be
able to create a cloud budget. (e.g. "we anticipate hosting costs of
$120,000 in August to support a predicted 750,000 users.")
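To make that modeling concrete, here's a rough sketch in Python. Every
number in it (instance capacities, unit costs, storage rates, the usage
mix) is a placeholder assumption to show the shape of the model, not a
measurement -- real values would come from the load tests this thread is
about:

```python
# Rough capacity/cost model. Every number below is a placeholder
# assumption, not a measurement.

# Users each instance type can comfortably host, and its monthly cost
# (both hypothetical).
INSTANCE_TYPES = {
    "t2.large":   (500, 70),      # (concurrent users, $/month)
    "c4.4xlarge": (5000, 580),
}

# Storage per user per month (GB) by usage profile, and the assumed
# mix of the user population across those profiles.
STORAGE_GB = {"low": 0.1, "average": 0.5, "high": 2.0}
USER_MIX = {"low": 0.6, "average": 0.3, "high": 0.1}

ACTIVE_FRACTION = 0.10        # ~10% of users active at any point
STORAGE_COST_PER_GB = 0.10    # $/GB-month, placeholder


def monthly_cost(total_users, instance="c4.4xlarge"):
    """Estimate hosting cost for one month at a given user count."""
    capacity, unit_cost = INSTANCE_TYPES[instance]
    concurrent = int(total_users * ACTIVE_FRACTION)
    instances = -(-concurrent // capacity)    # ceiling division
    compute = instances * unit_cost
    gb_per_user = sum(STORAGE_GB[k] * USER_MIX[k] for k in USER_MIX)
    storage = total_users * gb_per_user * STORAGE_COST_PER_GB
    return compute + storage


# e.g. the predicted 750,000 users in August:
print(round(monthly_cost(750_000)))
```

Once the per-instance numbers are measured, plugging the predicted X
into a model like this gives the monthly budget line directly.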

But, in case it's not abundantly clear, we need to be able to scale to tens
of millions of users, with some reasonable percentage of the population
active at any point in time (say, 10%).

How Many Work Items?

If you want to bucket projects as Small, Medium, and Large, that may be too
coarse, but it works out to something like 5,000; 100,000; and 3,000,000. Here,
again, it may make more sense to model # of work items per user (with low,
average and high usage users), and then we can make a model of total # of
work items over time (as we'll already have a prediction for # of users).
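The same idea as a sketch -- the per-user rates and the population mix
below are made-up placeholders, reusing the low/average/high profiles:

```python
# Sketch: total work items as a function of the user base. The per-user
# work-item counts and the population mix are placeholder assumptions.
WI_PER_USER = {"low": 20, "average": 200, "high": 2000}
USER_MIX = {"low": 0.6, "average": 0.3, "high": 0.1}


def total_work_items(total_users):
    """Predicted total work items for a given number of users."""
    avg_per_user = sum(WI_PER_USER[k] * USER_MIX[k] for k in USER_MIX)
    return int(total_users * avg_per_user)


# Feed in the predicted # of users per month to get work items per month:
print(total_work_items(750_000))
```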

Required Query Performance?

This is a really hard one to answer. Most user interactions will be via the
web user interface, and here we can employ all sorts of tricks to load the
data on demand (e.g., when you scroll down in Twitter). Really the only
metric that matters here is that the site is responsive, and there is
plenty of research out there that correlates response time to user
abandonment. (e.g. 100ms is good. 2,000ms is very bad.) In instances where
there is a need to process, say, 100,000 WIs all at once, that's generally
related to reporting and we've got more flexibility in those cases.

It's less important to answer "how long does it take to process a query
which returns 5,000 work items" and more important to ask "what do we have to do to
draw a Kanban board with the 30 relevant work items from a database of
10,000,000 work items."
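As a sketch of that second question (schema and column names here are
hypothetical): with a composite index on the right columns, drawing one
board touches only the ~30 relevant rows, however large the table grows.

```python
# Hypothetical schema to illustrate the Kanban-board query. The composite
# index is what keeps the query from scanning all 10,000,000 rows.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE work_items (
                  id INTEGER PRIMARY KEY,
                  board_id INTEGER,
                  state TEXT,
                  title TEXT)""")
db.execute("CREATE INDEX idx_board_state ON work_items(board_id, state)")

# Stand-in data (a real test would load millions of rows).
db.executemany(
    "INSERT INTO work_items(board_id, state, title) VALUES (?, ?, ?)",
    [(i % 1000, "open" if i % 7 else "done", f"WI {i}")
     for i in range(50_000)])

# Draw one board: the planner resolves this via the index, not a scan,
# so latency depends on the 30 rows returned, not the table size.
rows = db.execute(
    "SELECT id, title FROM work_items"
    " WHERE board_id = ? AND state = ? LIMIT 30",
    (42, "open")).fetchall()
print(len(rows))
```

The performance tests should assert exactly this property: board-draw
latency stays flat as the total work-item count grows.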

   -Todd

On Wed, Oct 12, 2016 at 1:47 PM, Michael Kleinhenz <kleinhenz at redhat.com>
wrote:

> Thanks for bringing this topic up. I think it is really important to set
> goals here as soon as possible and continuously challenge the system
> architecture with them. Scaling can become a nasty neck-breaker when not
> considered from the start. The PDD does not contain real numbers here.
>
> On Wed, Oct 12, 2016 at 7:15 PM, Leonard Dimaggio <ldimaggi at redhat.com>
> wrote:
>
>> 'Afternoon everyone,
>>
>> I wanted to start a discussion about a topic that we'll have to consider
>> soon - system capacity, scalability, responsiveness - and how we create/run
>> automated performance tests. It's obviously premature to run stress tests
>> today, but we want to avoid a situation where we don't build for high
>> performance and scalability in the future.
>>
>> Some topics we should discuss:
>>
>>    - For our hosted and on-premise service, how many concurrent users do
>>    we want to support?
>>    - For a project, how many work items will constitute a "small,"
>>    "medium," or "large" project?
>>    - What response times do we want to support for queries that return
>>    10, 100, or 10,000 work items?
>>    - etc.
>>
>> We will want to build automated tests to verify performance, throughput,
>> reliability, etc. Ideally we'll start with a basic framework for tests
>> that can be run directly against the core and through the UI -- and we'll
>> want to start building the framework and tests early so that they can be
>> improved incrementally in the sprints.
>>
>> Does anyone have opinions, suggestions, requests, etc.?
>>
>>
>> Thanks!,
>> Len D.
>>
>>
>>
>> --
>> Len DiMaggio (ldimaggi at redhat.com)
>> JBoss by Red Hat
>> 314 Littleton Road
>> Westford, MA 01886  USA
>> tel:  978.392.3179
>> cell: 781.472.9912
>> http://www.redhat.com
>> http://community.jboss.org/people/ldimaggio
>>
>>
>>
>> _______________________________________________
>> almighty-public mailing list
>> almighty-public at redhat.com
>> https://www.redhat.com/mailman/listinfo/almighty-public
>>
>>
>
>
> --
> Michael Kleinhenz
> Principal Software Engineer
>
> Red Hat Deutschland GmbH
> Werner-von-Siemens-Ring 14
> 85630 Grasbrunn
> Germany
>
> RED HAT | TRIED. TESTED. TRUSTED.
> Red Hat GmbH, www.de.redhat.com,
> Registered seat: Grasbrunn, Commercial register: Amtsgericht München, HRB
> 153243,
> Managing Directors: Paul Argiry, Charles Cachera, Michael Cunningham,
> Michael O'Neill
>
>
>