[Pulp-dev] Pulp multinode testing

Michael Hrivnak mhrivnak at redhat.com
Tue Feb 7 16:16:43 UTC 2017


I think there are two ways to approach it, both valuable:

1) Be able to test a Pulp deployment without regard to how it is deployed.
The tests shouldn't care how many machines are involved or which services
are running where. This will help us validate new and interesting
deployment approaches, such as a container-based deployment. Any admin who
automates a multi-node deployment could fire up a test environment and
validate their setup.

2) Test specific features relevant to multi-node deployment. This could be
challenging, but may be worthwhile. I suggest listing the specific
multi-node behaviors you want to test and prioritizing each on its own
(a rough sketch of a couple of these checks follows the list). Examples:
- Failover of celerybeat and pulp_resource_manager
- Seeing services on all machines in the status API
- Using the same credentials to hit the REST API on multiple machines
- Retrieving the same content from multiple machines
- Running the DB and/or broker on a separate machine
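
As a minimal sketch of a couple of those checks (the status API and shared
credentials), something like the following Python could work. The status
endpoint path (/pulp/api/v2/status/ with a known_workers list), the
hostnames, and the admin credentials are assumptions about a typical Pulp 2
setup, so adjust them for the deployment under test:

    # Rough smoke test: every API host should report workers from every
    # machine, and the same credentials should work against each host.
    import requests

    HOSTS = ["pulp1.example.com", "pulp2.example.com"]  # assumed hostnames
    AUTH = ("admin", "admin")  # assumed default credentials

    def worker_hostnames(host):
        # verify=False only because test machines often use self-signed certs.
        url = "https://%s/pulp/api/v2/status/" % host
        response = requests.get(url, auth=AUTH, verify=False)
        response.raise_for_status()
        # Worker ids look like "reserved_resource_worker-0@<hostname>".
        return {worker["_id"].split("@")[-1]
                for worker in response.json()["known_workers"]}

    for host in HOSTS:
        seen = worker_hostnames(host)
        # Expect worker entries from at least as many hostnames as machines.
        assert len(seen) >= len(HOSTS), seen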

Which multi-node features you want to test, and how, will inform what
deployment model you decide to focus on.
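
For the two-machine, all-services layout quoted below (with a load balancer
in front), a rough way to check the "same content from multiple machines"
point is to fetch one published path directly from each host and compare
checksums. The hostnames and the repo path here are hypothetical, so
substitute a unit that actually exists in your test repo:

    # Rough content check: the same published unit should be byte-identical
    # no matter which machine serves it.
    import hashlib
    import requests

    HOSTS = ["pulp1.example.com", "pulp2.example.com"]  # assumed hostnames
    PATH = "/pulp/repos/zoo/bear-4.1-1.noarch.rpm"  # hypothetical published path

    digests = set()
    for host in HOSTS:
        response = requests.get("https://%s%s" % (host, PATH), verify=False)
        response.raise_for_status()
        digests.add(hashlib.sha256(response.content).hexdigest())

    # All machines should hand back exactly the same bytes.
    assert len(digests) == 1, digests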

Michael

On Tue, Feb 7, 2017 at 10:26 AM, Elyezer Rezende <erezende at redhat.com>
wrote:

>> I would also consider 2 machines with all services running on both. This
>> would ensure that all the services work correctly with multiple instances
>> running on multiple machines. You may need a simple load-balancer for all
>> HTTP traffic to ensure you're getting a mix of both machines serving API
>> requests and content.
>>
>
> Do you think we should have this as the final architecture, or should the
> testing be done on both architectures?
>

