Hello Jay,

The error is clear: you have LVM configured in exclusive mode, which means you can't access your VG from more than one node at a time. A quick way to see this from the command line is sketched below.
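Just as an illustration (these commands are not taken from your output; the VG and LV names cfq888dbvg/cfq888_db come from your listing below), you can check the clustered flag and where the LV is active like this:

  vgs -o vg_name,vg_attr cfq888dbvg   # a 'c' in the 6th vg_attr character means the VG is clustered
  lvs -o lv_name,lv_attr cfq888dbvg   # the 5th lv_attr character is 'a' only on the node that has the LV active

While rgmanager keeps cfq888_db activated exclusively on the service owner, the other nodes can't take the lock for the temporary pvmove mirror, which lines up with the "Volume is busy on another node" errors you pasted.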
2012/6/21 Jay Tingle <yogsothoth@sinistar.org>
Hi All,

I am having a problem using pvmove during some testing with Red Hat Cluster using CLVM on RHEL 6.2. I have 3 nodes, which are ESXi 5u1 VMs with the 'multi-writer' flag set on the shared VMDK devices, and I keep getting locking errors during the pvmove. Everything else seems to be working great as far as CLVM goes. Searching through the list archives and consulting the manuals, it looks like all you need is to have cmirrord running. The RHEL 6 manual mentions cmirror-kmod, which doesn't seem to exist anymore. Is there still a kernel module on RHEL 6? I am running standard CLVM with ext4 in an active/passive cluster. Does anyone know what I am doing wrong? Below are my LVM config and my cluster config. Thanks in advance.
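(For reference, these are the kinds of module checks I mean; just the commands, since I have not pasted their output here. The dm-log-userspace name is simply the module that pvmove -v loads below.)

  lsmod | grep -E 'dm_log_userspace|dm_mirror'   # userspace mirror-log and mirror targets
  modinfo dm-log-userspace | head -3             # prints module info if dm-log-userspace is available from the stock RHEL 6 kernel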

[root@rhc6esx1 ~]# rpm -qa|grep -i lvm
lvm2-libs-2.02.87-6.el6.x86_64
lvm2-2.02.87-6.el6.x86_64
lvm2-cluster-2.02.87-6.el6.x86_64
[root@rhc6esx1 ~]# rpm -q cman
cman-3.0.12.1-23.el6.x86_64
[root@rhc6esx1 ~]# rpm -q cmirror
cmirror-2.02.87-6.el6.x86_64

[root@rhc6esx1 ~]# ps -ef|grep cmirror
root     21253 20692  0 13:37 pts/1    00:00:00 grep cmirror
root     31858     1  0 13:18 ?        00:00:00 cmirrord

[root@rhc6esx1 ~]# pvs|grep cfq888dbvg
  /dev/sdf1  cfq888dbvg        lvm2 a--  20.00g     0
  /dev/sdi1  cfq888dbvg        lvm2 a--  20.00g     0
  /dev/sdj1  cfq888dbvg        lvm2 a--  20.00g     0
  /dev/sdk1  cfq888dbvg        lvm2 a--  80.00g 80.00g

[root@rhc6esx1 ~]# pvmove -v /dev/sdi1 /dev/sdk1
    Finding volume group "cfq888dbvg"
    Executing: /sbin/modprobe dm-log-userspace
    Archiving volume group "cfq888dbvg" metadata (seqno 7).
    Creating logical volume pvmove0
    Moving 5119 extents of logical volume cfq888dbvg/cfq888_db
  Error locking on node rhc6esx1-priv: Device or resource busy
  Error locking on node rhc6esx3-priv: Volume is busy on another node
  Error locking on node rhc6esx2-priv: Volume is busy on another node
  Failed to activate cfq888_db
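(If it helps, here is how I could check from one place which node actually has the LV active. This is only a sketch, assuming passwordless ssh between the -priv hostnames; I have not run it for the output above.)

  for n in rhc6esx1-priv rhc6esx2-priv rhc6esx3-priv; do
      echo "== $n =="
      ssh "$n" lvs -o lv_name,lv_attr cfq888dbvg   # 'a' in the 5th lv_attr column marks the node where it is active
  done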

[root@rhc6esx1 ~]# clustat
Cluster Status for rhc6 @ Thu Jun 21 13:35:49 2012
Member Status: Quorate

 Member Name                                              ID   Status
 ------ ----                                              ---- ------
 rhc6esx1-priv                                                1 Online, Local, rgmanager
 rhc6esx2-priv                                                2 Online, rgmanager
 rhc6esx3-priv                                                3 Online, rgmanager
 /dev/block/8:33                                              0 Online, Quorum Disk

 Service Name                                    Owner (Last)                                    State
 ------- ----                                    ----- ------                                    -----
 service:cfq888_grp                              rhc6esx1-priv                                   started

[root@rhc6esx1 ~]# lvm dumpconfig
  devices {
        dir="/dev"
        scan="/dev"
        obtain_device_list_from_udev=1
        preferred_names=["^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d"]
        filter="a/.*/"
        cache_dir="/etc/lvm/cache"
        cache_file_prefix=""
        write_cache_state=1
        sysfs_scan=1
        md_component_detection=1
        md_chunk_alignment=1
        data_alignment_detection=1
        data_alignment=0
        data_alignment_offset_detection=1
        ignore_suspended_devices=0
        disable_after_error_count=0
        require_restorefile_with_uuid=1
        pv_min_size=2048
        issue_discards=0
  }
  dmeventd {
        mirror_library="libdevmapper-event-lvm2mirror.so"
        snapshot_library="libdevmapper-event-lvm2snapshot.so"
  }
  activation {
        checks=0
        udev_sync=1
        udev_rules=1
        verify_udev_operations=0
        missing_stripe_filler="error"
        reserved_stack=256
        reserved_memory=8192
        process_priority=-18
        mirror_region_size=512
        readahead="auto"
        mirror_log_fault_policy="allocate"
        mirror_image_fault_policy="remove"
        snapshot_autoextend_threshold=100
        snapshot_autoextend_percent=20
        use_mlockall=0
        monitoring=1
        polling_interval=15
  }
  global {
        umask=63
        test=0
        units="h"
        si_unit_consistency=1
        activation=1
        proc="/proc"
        locking_type=3
        wait_for_locks=1
        fallback_to_clustered_locking=1
        fallback_to_local_locking=1
        locking_dir="/var/lock/lvm"
        prioritise_write_locks=1
        abort_on_internal_errors=0
        detect_internal_vg_cache_corruption=0
        metadata_read_only=0
        mirror_segtype_default="mirror"
  }
  shell {
        history_size=100
  }
  backup {
        backup=1
        backup_dir="/etc/lvm/backup"
        archive=1
        archive_dir="/etc/lvm/archive"
        retain_min=10
        retain_days=30
  }
  log {
        verbose=0
        syslog=1
        overwrite=0
        level=0
        indent=1
        command_names=0
        prefix="  "
  }

[root@rhc6esx1 ~]# ccs -h localhost --getconf
<cluster config_version="273" name="rhc6">
  <fence_daemon clean_start="0" post_fail_delay="20" post_join_delay="60"/>
  <clusternodes>
    <clusternode name="rhc6esx1-priv" nodeid="1">
      <fence>
        <method name="1">
          <device name="fence_vmware" uuid="422a2b6a-4093-2694-65e0-a01332ef54bd"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rhc6esx2-priv" nodeid="2">
      <fence>
        <method name="1">
          <device name="fence_vmware" uuid="422a9c5d-f9e2-8150-340b-c84b834ba068"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="rhc6esx3-priv" nodeid="3">
      <fence>
        <method name="1">
          <device name="fence_vmware" uuid="422af24c-909f-187d-4e64-2a28cbe5d09d"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="5"/>
  <fencedevices>
    <fencedevice agent="fence_vmware_soap" ipaddr="192.168.1.111" login="mrfence" name="fence_vmware" passwd="FenceM3" ssl="yes" verbose="yes"/>
  </fencedevices>
  <totem token="30000"/>
  <quorumd interval="1" label="rhc6esx-quorum" stop_cman="1" tko="10" votes="2"/>
  <logging logfile_priority="info" syslog_facility="daemon" syslog_priority="warning" to_logfile="yes" to_syslog="yes">
    <logging_daemon logfile="/var/log/cluster/qdiskd.log" name="qdiskd"/>
    <logging_daemon logfile="/var/log/cluster/fenced.log" name="fenced"/>
    <logging_daemon logfile="/var/log/cluster/dlm_controld.log" name="dlm_controld"/>
    <logging_daemon logfile="/var/log/cluster/gfs_controld.log" name="gfs_controld"/>
    <logging_daemon debug="on" logfile="/var/log/cluster/rgmanager.log" name="rgmanager"/>
    <logging_daemon name="corosync" to_logfile="no"/>
  </logging>
  <rm log_level="7">
    <failoverdomains>
      <failoverdomain name="rhc6esx3_home" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="rhc6esx3-priv" priority="1"/>
        <failoverdomainnode name="rhc6esx2-priv" priority="2"/>
        <failoverdomainnode name="rhc6esx1-priv" priority="3"/>
      </failoverdomain>
      <failoverdomain name="rhc6esx2_home" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="rhc6esx2-priv" priority="1"/>
        <failoverdomainnode name="rhc6esx1-priv" priority="2"/>
        <failoverdomainnode name="rhc6esx3-priv" priority="3"/>
      </failoverdomain>
      <failoverdomain name="rhc6esx1_home" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="rhc6esx1-priv" priority="1"/>
        <failoverdomainnode name="rhc6esx2-priv" priority="2"/>
        <failoverdomainnode name="rhc6esx3-priv" priority="3"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <lvm name="cfq888vg_lvm" self_fence="1" vg_name="cfq888vg"/>
      <lvm name="cfq888bkpvg_lvm" self_fence="1" vg_name="cfq888bkpvg"/>
      <lvm name="cfq888dbvg_lvm" self_fence="1" vg_name="cfq888dbvg"/>
      <lvm name="cfq888revg_lvm" vg_name="cfq888revg"/>
      <lvm name="cfq888flashvg_lvm" self_fence="1" vg_name="cfq888flashvg"/>
      <ip address="192.168.1.31" monitor_link="1"/>
      <fs device="/dev/cfq888vg/cfq888" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888" name="cfq888_mnt" self_fence="0"/>
      <fs device="/dev/cfq888vg/cfq888_ar" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_ar" name="cfq888_ar_mnt" self_fence="0"/>
      <fs device="/dev/cfq888vg/cfq888_sw" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_sw" name="cfq888_sw_mnt" self_fence="0"/>
      <fs device="/dev/cfq888bkpvg/cfq888_dmp" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_dmp" name="cfq888_dmp_mnt" self_fence="0"/>
      <fs device="/dev/cfq888bkpvg/cfq888_bk" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_bk" name="cfq888_bk_mnt" self_fence="0"/>
      <fs device="/dev/cfq888dbvg/cfq888_db" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_db" name="cfq888_db_mnt" self_fence="0"/>
      <fs device="/dev/cfq888flashvg/cfq888_flash" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_bk/cfq888_flash" name="cfq888_flash_mnt" self_fence="0"/>
      <fs device="/dev/cfq888revg/cfq888_rd" force_fsck="0" force_unmount="1" fstype="ext4" mountpoint="/cfq888/cfq888_rd" name="cfq888_rd_mnt" self_fence="0"/>
      <oracledb home="/u01/app/oracle/product/11.2.0/dbhome_1" listener_name="cfq888_lsnr" name="cfq888" type="base" user="oracle"/>
    </resources>
    <service autostart="1" domain="rhc6esx1_home" exclusive="0" name="cfq888_grp" recovery="restart">
      <lvm ref="cfq888vg_lvm"/>
      <lvm ref="cfq888bkpvg_lvm"/>
      <lvm ref="cfq888dbvg_lvm"/>
      <lvm ref="cfq888revg_lvm"/>
      <lvm ref="cfq888flashvg_lvm"/>
      <fs ref="cfq888_mnt">
        <fs ref="cfq888_ar_mnt"/>
        <fs ref="cfq888_sw_mnt"/>
        <fs ref="cfq888_dmp_mnt"/>
        <fs ref="cfq888_bk_mnt">
          <fs ref="cfq888_flash_mnt"/>
        </fs>
        <fs ref="cfq888_db_mnt"/>
        <fs ref="cfq888_rd_mnt"/>
      </fs>
      <ip ref="192.168.1.31"/>
      <oracledb ref="cfq888"/>
    </service>
  </rm>
</cluster>

thanks,
--Jason

--
this is my life and I live it for as long as God wills