[Linux-cluster] How to run same service in parallel in RedHat Cluster 5.0
Ruben Sajnovetzky
rsajnove at cisco.com
Wed Sep 28 00:33:23 UTC 2011
I might be doing something wrong, because you said "you are fine", but it
didn't work :(
All servers have "/opt/app" mounted on an internal disk partition.
The disks are not shared; it is just that all the servers have an identical layout.
I tried to create:
Resource name: Central_FS
Device: /dev/mapper/VolGroup00-optvol
FS Type: ext3
Mount point: /opt
And
Resource name: Collector_FS
Device: /dev/mapper/VolGroup00-optvol
FS Type: ext3
Mount point: /opt
When I tried to save it, I found this in /var/log/messages:
clurgmgrd[4174]: <notice> Reconfiguring
clurgmgrd[4174]: <err> Unique attribute collision. type=fs attr=mountpoint
value=/opt
clurgmgrd[4174]: <err> Error storing fs resource
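For context, here is a cluster.conf sketch of the failing pattern and one possible workaround (structure is illustrative; only the device and mount point come from the post). rgmanager treats "mountpoint" as a unique attribute of the fs resource type, so two fs definitions on /opt collide exactly as the log shows. One alternative, assuming the mount should be cluster-managed at all, is to define the fs resource once and reference it from each service by name:

```xml
<!-- Sketch only: Opt_FS and the service names are illustrative. -->
<rm>
  <resources>
    <!-- Defined once; a second fs resource with mountpoint="/opt"
         would trigger the "Unique attribute collision" error. -->
    <fs name="Opt_FS" device="/dev/mapper/VolGroup00-optvol"
        fstype="ext3" mountpoint="/opt"/>
  </resources>
  <service name="Central">
    <fs ref="Opt_FS"/>
    <!-- service-specific resources go here -->
  </service>
  <service name="Collector">
    <fs ref="Opt_FS"/>
  </service>
</rm>
```

Since /opt here is a local, non-shared partition on every node, another option is to leave it out of the cluster configuration entirely and mount it from /etc/fstab, so rgmanager never has to track it.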
Thanks for your help and ideas!
On 27-Sep-2011 8:19 PM, "Digimer" <linux at alteeve.com> wrote:
> On 09/27/2011 05:04 PM, Ruben Sajnovetzky wrote:
>>
>> Good example, thanks.
>> Not sure if it is doable, because we could have 10 servers, and having
>> 10 service instances could be tricky to admin :(
>
> Oh? How so? The file would be a bit long, but even with ten definitions
> it should still be manageable. Particularly so if you use a tool like luci.
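For what it's worth, the ten-definition layout being discussed could look roughly like this (node, service, and script names are made up for illustration): one restricted, single-node failover domain per server, each pinning its own service instance:

```xml
<!-- Illustrative sketch: repeat the pattern for node2 ... node10. -->
<rm>
  <failoverdomains>
    <!-- Restricted domain so the service runs only on this node. -->
    <failoverdomain name="only_node1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <service name="app_node1" domain="only_node1" autostart="1">
    <script ref="app_init_script"/>
  </service>
  <!-- ...app_node2 through app_node10 follow the same pattern... -->
</rm>
```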
>
>> What about the other question, related to using the same device names and
>> mount points?
>
> I didn't follow that question. Rather, that sounds like a much bigger
> question...
>
> If '/opt/app' is local to each node, containing separate installs of the
> application, it should be fine. However, I expect this is not the case,
> or you'd not be asking.
>
> If, on the other hand, '/opt/app' is on shared storage (i.e. an NFS mount,
> GFS2 partition, etc) then it should still be fine. Look again at that
> link and search for '/xen_shared'. That is a common chunk of space
> (using clvmd and gfs2) which is un/mounted by the cluster and it is
> mounted in the same place on all nodes (and uses the same LV device name).
>
> If I am not answering your question, please ask again. :)
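The '/xen_shared' arrangement Digimer describes would, under the same assumptions (clvmd plus GFS2; volume-group and LV names here are illustrative), be expressed with the clusterfs resource agent, which can mount the same device at the same point on every node without the unique-mountpoint collision that plain fs resources hit:

```xml
<!-- Sketch only: device and resource names are illustrative. -->
<rm>
  <resources>
    <!-- clusterfs mounts a GFS2 filesystem concurrently on each
         node that needs it. -->
    <clusterfs name="shared_app" device="/dev/shared_vg/app_lv"
               fstype="gfs2" mountpoint="/opt/app" force_unmount="0"/>
  </resources>
</rm>
```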