[Pulp-dev] Stages API Performance Data Collection
dawalker at redhat.com
Wed Sep 19 18:56:01 UTC 2018
Cool, thanks for the clarification. Sounds great.
Associate Software Engineer
On Wed, Sep 19, 2018 at 1:04 PM, Brian Bouterse <bbouters at redhat.com> wrote:
> On Mon, Sep 17, 2018 at 3:10 PM Dana Walker <dawalker at redhat.com> wrote:
>> I love this idea! Running benchmarks as we go will allow us to react
>> quickly if there are unforeseen performance pain points.
>> Have you run anything similar to this proposal back in Pulp2 or
>> elsewhere? I'm a little concerned about the storage capacity needed for
>> the sheer number of sqlite3 databases generated. Maybe a script could
>> periodically empty /var/lib/pulp/debug/ as it reaches certain configured
>> size/age limits?
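[A periodic cleanup along these lines could be sketched as below. The /var/lib/pulp/debug/ path comes from the thread, but the function name and the size/age limits are illustrative assumptions, not an agreed design.]

```python
import os
import time


def prune_debug_dir(path="/var/lib/pulp/debug", max_age_secs=7 * 24 * 3600,
                    max_total_bytes=100 * 1024 * 1024):
    """Delete profiling databases that exceed configured age or size limits."""
    entries = []
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            st = os.stat(full)
            entries.append((st.st_mtime, st.st_size, full))

    # First pass: drop anything older than the age limit.
    now = time.time()
    kept = []
    for mtime, size, full in entries:
        if now - mtime > max_age_secs:
            os.remove(full)
        else:
            kept.append((mtime, size, full))

    # Second pass: if the directory is still over the size cap,
    # delete the oldest remaining files first.
    kept.sort()
    total = sum(size for _, size, _ in kept)
    for mtime, size, full in kept:
        if total <= max_total_bytes:
            break
        os.remove(full)
        total -= size
```

[Such a script could run from cron or a systemd timer; the two-pass order means age limits always win, and the size cap only evicts the oldest survivors.]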
> We did have a similar feature in Pulp2 that would output a cProfile with
> the filename being the task UUID (docs link below). I don't think storage
> was an issue there, but users would have to confirm for us. When users
> would use it, they would turn the feature on, run the troublesome workload,
> then turn it off again, so it's usually only a few tasks. I think each db
> will be very small, probably < 1 MB.
>> Dana Walker
>> Associate Software Engineer
>> Red Hat
>> On Mon, Sep 17, 2018 at 2:36 PM, Brian Bouterse <bbouters at redhat.com> wrote:
>>> I'm interested in implementing a data collection feature for Pulp3. This
>>> will allow us to easily and accurately benchmark pipeline performance to
>>> clearly show improvement as we make changes. Borrowing from my old queueing
>>> theory days... here is a data collection feature proposal:
>>> Any comment/ideas are welcome. Thank you!
>>> Pulp-dev mailing list
>>> Pulp-dev at redhat.com