[Pulp-list] /var/lib/pulp on glusterfs?
zberrie at redhat.com
Mon Nov 2 17:45:44 UTC 2015
Thanks. I’ll see what I can do with the limited hardware I have. I think the best I’ll be able to do is run the same test twice on a single virtualized node: once against local disk accessed directly, and once against glusterfs served by a cluster.
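For the raw-disk half of that comparison, a quick sequential-throughput pass with dd is a reasonable first cut (fio gives more rigorous numbers). This is only a sketch, and the TARGET path is an assumption -- point it at whichever mount is under test:

```shell
#!/bin/sh
# Rough sequential write/read throughput check -- a sketch, not a full benchmark.
# TARGET is an assumption: run once against the local disk mount and once
# against the glusterfs mount, then compare the MB/s dd reports on stderr.
TARGET=${TARGET:-/var/lib/pulp}

# Write 256 MiB; conv=fdatasync forces the data to disk so the page cache
# doesn't inflate the write figure.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fdatasync

# Read it back. For an honest read number, drop caches first as root:
#   echo 3 > /proc/sys/vm/drop_caches
dd if="$TARGET/ddtest.bin" of=/dev/null bs=1M

rm -f "$TARGET/ddtest.bin"
```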
I like glusterfs, but in attempting to sell it I’ve run into a lot of situations where what it can’t do seems to overshadow what it can do.
At first glance pulp does sound like a very good use case, since it’s mostly relatively large file operations without hotspots (AFAIK).
The problem that remains is the requirement for MongoDB ... which probably should not be run on glusterfs. In a hyper-converged virtualization configuration that’s all we have. So we might need to figure out another answer for that.
Zak Berrie, RHCE
(formerly Zak Brown)
Red Hat, Inc.
> On Nov 2, 2015, at 6:09 AM, Brian Bouterse <bbouters at redhat.com> wrote:
> I personally haven't used Pulp with /var/lib/pulp hosted on glusterfs,
> but it should work. I've heard from others that they've specifically
> done it, and that it worked for them.
> The clustering guide outlines Pulp's storage system requirements
> in a filesystem-agnostic way. That should have all the
> necessary details regarding mount points, POSIX user permissions,
> SELinux labels, etc.
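A minimal sketch of those mount/permission steps, assuming a gluster volume named "pulp" on a host gluster01 -- the owner and the SELinux type below are assumptions, so verify both against the clustering guide before relying on them:

```shell
# Mount the gluster volume at Pulp's data directory and persist it.
# Host and volume names are assumptions for illustration.
mount -t glusterfs gluster01:/pulp /var/lib/pulp
echo 'gluster01:/pulp /var/lib/pulp glusterfs defaults,_netdev 0 0' >> /etc/fstab

# Pulp's WSGI app runs under Apache, so the tree must be writable by it
# (owner assumed; check the clustering guide):
chown -R apache:apache /var/lib/pulp

# Persist and apply an SELinux label the httpd processes can write to
# (httpd_sys_rw_content_t is an assumption here):
semanage fcontext -a -t httpd_sys_rw_content_t '/var/lib/pulp(/.*)?'
restorecon -R /var/lib/pulp
```

These commands need root and a reachable gluster server, so treat them as a checklist rather than a script to paste.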
> For performance testing of the disk-heavy Pulp operations, I would
> probably test sync, publish, re-sync, re-publish. I would also sync
> from a large, on-premise, high-speed content source (i.e., a locally
> sync'd copy of RHEL or EPEL). I would try to ensure the network could serve
> the bits to Pulp during a sync or re-sync faster than the disk
> operations of Pulp which would cause the Pulp disk speed to become the
> rate limiting component. Good testing methodology should be used. For
> example, the system serving the content Pulp is syncing from should
> be a separate system from the device under test (Pulp+glusterfs).
> Also, maybe look through the importer and distributor options that are
> being used to see what impact those have on performance.
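On the Pulp 2 CLI, that sync/publish/re-sync/re-publish cycle can be timed roughly as below; the repo id and feed URL are assumptions (point the feed at the local mirror), and this presumes a running Pulp server you have already logged in to with pulp-admin:

```shell
# Time the disk-heavy cycle: sync, publish, re-sync, re-publish.
# Repo id and feed URL are placeholders; use a local high-speed mirror.
pulp-admin rpm repo create --repo-id epel-bench \
    --feed http://mirror.example.com/epel/7/x86_64/

time pulp-admin rpm repo sync run --repo-id epel-bench
time pulp-admin rpm repo publish run --repo-id epel-bench

# Run both again to measure the (mostly no-op) re-sync and re-publish:
time pulp-admin rpm repo sync run --repo-id epel-bench
time pulp-admin rpm repo publish run --repo-id epel-bench
```

Comparing the four timings between the local-disk and glusterfs configurations isolates the storage layer's contribution.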
> It would be great if you share any benchmarks that you do with the
> Pulp community.
> -pulp
> -Brian
> On 10/29/2015 12:51 PM, Zak Berrie wrote:
>> I’m experimenting with hyper-converged virtualization based on
>> oVirt (RHEV) and Gluster. In this configuration a small set of
>> nodes provides both virtualization and storage services on each
>> physical system.
>> One of the workloads that I’m planning to run on this environment
>> is Satellite 6. I’m wondering if it might make sense to locate
>> the pulp data directory (/var/lib/pulp) on glusterfs directly
>> rather than inside of the VM image (which is in turn served by
>> gluster). It seems to make sense to remove some layers if possible.
>> I’m curious if anyone has already attempted to run pulp on top of
>> glusterfs. Beyond that, if I were to perform some testing of different
>> configurations, what do you think is a valid way to benchmark pulp?
>> Of course MongoDB and PostgreSQL are monsters of their
>> own… I’m working out ways to make sure that Mongo and Postgres will
>> only run on fast SSD-based storage but that’s for another list.
>> --
>> Zak Berrie, RHCE (formerly Zak Brown)
>> Solutions Architect
>> Red Hat, Inc.
>> (310) 293-1949
>> http://bit.ly/zb-bluejeans
>> _______________________________________________
>> Pulp-list mailing list
>> Pulp-list at redhat.com