[Linux-cluster] problems with clvmd and lvms on rhel6.1
lists at alteeve.ca
Fri Aug 10 15:15:37 UTC 2012
On 08/10/2012 11:07 AM, Poós Krisztián wrote:
> Dear all,
> I hope someone has run into this problem in the past and can help me
> resolve this issue.
> There is a 2-node RHEL cluster, with quorum.
> There are clustered LVs, with the clustered (-c-) attribute flag set.
> When I start clvmd, all the clustered LVs come online.
> After this, if I start rgmanager, it deactivates all the volumes and
> cannot activate them again, because the devices no longer exist when
> the service starts up, so the service fails. All LVs are left without
> the active flag.
> I can bring a service up manually, but only if, after clvmd has
> started, I first deactivate the LVs by hand with lvchange -an <lv>.
> After that, when I start rgmanager, it brings the service online
> without problems. However, I think this step should be done by
> rgmanager itself. The logs are full of the following:
> rgmanager Making resilient: lvchange -an ....
> rgmanager lv_exec_resilient failed
> rgmanager lv_activate_resilient stop failed on ....
> Also, the lvs/clvmd commands sometimes hang; I have to restart clvmd
> (sometimes kill it) to make them work again.
> Does anyone have any idea what to check?
> Thanks and regards,
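For reference, the manual workaround described above can be sketched as the following command sequence; this is a hedged illustration, and myvg/mylv are placeholder VG/LV names, not names from the original post:

```shell
# Start clvmd; on RHEL 6 this activates all clustered LVs by default.
service clvmd start

# Deactivate the clustered LV by hand so rgmanager starts from a clean
# state (replace myvg/mylv with your actual VG/LV names).
lvchange -an myvg/mylv

# Now rgmanager can activate the LV itself when it starts the service.
service rgmanager start
```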
Please paste your cluster.conf file, with minimal edits.
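As a point of comparison, an rgmanager service managing a clustered LV plus a filesystem is often structured roughly like the fragment below. This is only a sketch; all names, paths, and the fstype are placeholder assumptions, not values from the poster's configuration:

```xml
<rm>
  <resources>
    <!-- vg_name/lv_name are placeholders; lv_name may be omitted to
         manage the whole volume group as a single resource. -->
    <lvm name="my_lvm" vg_name="myvg" lv_name="mylv"/>
    <fs name="my_fs" device="/dev/myvg/mylv" mountpoint="/data"
        fstype="ext4"/>
  </resources>
  <service autostart="1" name="my_service" recovery="relocate">
    <lvm ref="my_lvm">
      <fs ref="my_fs"/>
    </lvm>
  </service>
</rm>
```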
Papers and Projects: https://alteeve.com