[Linux-cluster] RHCS + DRBD - DRBD file systems not mounting

Goutam Baul goutam.baul at rp-sg.in
Wed Aug 28 03:50:30 UTC 2013


 

 

>On 2013/8/27 wrote

      

>Are you using a SAN with EMC PowerPath?

 

Yes

 

>Does the drbd service start on boot?

>Start drbd by hand and run the commands ls -l /dev/drbd/by-res/r0/1 and
>ls -l /dev/drbd/by-res/r0/0

 

It works fine if we do things manually. The only issue comes when we try to
run things under rgmanager.
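
For reference, the manual bring-up that works for us is roughly the following sketch (device paths and mount points taken from the r0 config and cluster.conf quoted below; exact invocations may differ):

    # load the module and bring the resource up (both nodes)
    modprobe drbd
    drbdadm up r0

    # promote the node that should serve the mail store
    drbdadm primary r0

    # udev then creates the by-res links referenced in cluster.conf
    ls -l /dev/drbd/by-res/r0/0    # -> ../../drbd1 (volume 0)
    ls -l /dev/drbd/by-res/r0/1    # -> ../../drbd2 (volume 1)

    # mount where the cluster expects the file systems
    mount /dev/drbd/by-res/r0/0 /home
    mount /dev/drbd/by-res/r0/1 /var/spool/postfix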

>>On 2013/8/27 Goutam Baul wrote:

>>Dear List,

>>We have configured DRBD 8.4.3 (community version) to the point where we can
>>manually start DRBD on both nodes, switch each node between primary and
>>secondary, and data is replicated from the primary to the secondary node. We
>>are doing this for our Postfix mail server. The DRBD configuration file is:

>>resource r0 {
>>  startup {
>>    wfc-timeout 1;
>>    degr-wfc-timeout 2;
>>  }
>>  volume 0 {
>>    device    /dev/drbd1;
>>    disk      /dev/emcpowera2;
>>    meta-disk /dev/emcpowera5;
>>  }
>>  volume 1 {
>>    device    /dev/drbd2;
>>    disk      /dev/emcpowera3;
>>    meta-disk /dev/emcpowera6;
>>  }
>>  on drmail1.cesc.co.in {
>>    address   10.50.4.14:7789;
>>  }
>>  on drbdtest2.cesc.co.in {
>>    address   10.50.81.253:7789;
>>  }
>>}
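
A quick sanity check that this definition parses and that the minors get allocated, assuming the stock 8.4 userland:

    drbdadm dump r0    # re-print the parsed resource definition
    cat /proc/drbd     # shows minors 1 and 2 once the resource is up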

 

>>We want to run the Postfix mail system on an RHCS cluster for high
>>availability. The Postfix mail store and mail queue will live on the DRBD
>>devices so that they are replicated from our data center to our
>>disaster-recovery site. The cluster.conf is as follows:

 

>><?xml version="1.0"?>
>><cluster config_version="27" name="dr-mail">
>>  <fence_daemon clean_start="1" post_join_delay="30"/>
>>  <clusternodes>
>>    <clusternode name="drmail1.cesc.co.in" nodeid="1">
>>      <fence>
>>        <method name="Method1_drac">
>>          <device name="fence_drac1"/>
>>        </method>
>>      </fence>
>>    </clusternode>
>>    <clusternode name="drmail2.cesc.co.in" nodeid="2">
>>      <fence>
>>        <method name="Method1_drac">
>>          <device name="fence_drac2"/>
>>        </method>
>>      </fence>
>>    </clusternode>
>>  </clusternodes>
>>  <cman expected_votes="1" two_node="1"/>
>>  <fencedevices>
>>    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="10.50.4.22" login="root" module_name="fence_drac1" name="fence_drac1" passwd="calvin" secure="on"/>
>>    <fencedevice agent="fence_drac5" cmd_prompt="admin1->" ipaddr="10.50.4.23" login="root" module_name="fence_drac2" name="fence_drac2" passwd="calvin" secure="on"/>
>>  </fencedevices>
>>  <rm>
>>    <failoverdomains>
>>      <failoverdomain name="dr_mail" nofailback="0" ordered="1" restricted="1">
>>        <failoverdomainnode name="drmail1.cesc.co.in" priority="1"/>
>>        <failoverdomainnode name="drmail2.cesc.co.in" priority="2"/>
>>      </failoverdomain>
>>    </failoverdomains>
>>    <resources>
>>      <ip address="10.50.4.20" monitor_link="on" sleeptime="10"/>
>>      <drbd name="drbd-mail" resource="r0">
>>        <fs device="/dev/drbd/by-res/r0/0" force_unmount="1" fsid="61850" mountpoint="/home" fstype="ext3" name="drbd-mailstore" options="noatime"/>
>>        <fs device="/dev/drbd/by-res/r0/1" force_unmount="1" fsid="61855" mountpoint="/var/spool/postfix" fstype="ext3" name="drbd-spool" options="noatime"/>
>>      </drbd>
>>      <script file="/etc/init.d/postfix" name="postfix"/>
>>    </resources>
>>    <service domain="dr_mail" name="mail" recovery="relocate">
>>      <ip ref="10.50.4.20"/>
>>      <drbd ref="drbd-mail"/>
>>      <script ref="postfix"/>
>>    </service>
>>  </rm>
>></cluster>
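
As a cross-check, this resource tree can be validated and exercised outside rgmanager with the standard RHEL 6 cluster tools (a sketch; the service name is taken from the config above):

    ccs_config_validate                                          # schema-check /etc/cluster/cluster.conf
    rg_test test /etc/cluster/cluster.conf                       # parse and display the resource tree
    rg_test test /etc/cluster/cluster.conf start service mail    # run the service start order by hand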

>>If we run this RHCS cluster, we get the following error in the log:

>>Aug 27 17:38:19 drmail1 rgmanager[4695]: drbd not found in /proc/modules. Do you need to modprobe?

>>If we run the RHCS cluster after issuing the modprobe drbd command, we get the following messages:

>>1: Failure: (127) Device minor not allocated
>>additional info from kernel:
>>unknown minor
>>Command 'drbdsetup role 1' terminated with exit code 10
>><debug>  DRBD resource r0 already configured
>>DRBD resource r0 already configured
>>Start of mail complete
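
Our reading of this is that the module is loaded but the r0 minors (1 and 2) have not been set up in the kernel on that node (the equivalent of drbdadm up r0 has not happened), so drbdsetup cannot query their role. A way to confirm that state, assuming the 8.4 tooling:

    cat /proc/drbd     # minors 1 and 2 are absent until the resource is up
    drbdadm role r0    # expected to fail the same way while the minors are unallocated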

>>But the DRBD service is not starting up, and therefore the corresponding
>>devices are not being created. The log then shows:

>>Aug 27 17:38:33 drmail1 rgmanager[5147]: stop: Could not match /dev/drbd/by-res/r0/1 with a real device
>>Aug 27 17:38:33 drmail1 rgmanager[5166]: stop: Could not match /dev/drbd/by-res/r0/0 with a real device
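
The /dev/drbd/by-res/* links are created by udev only once the corresponding minors exist, which would explain why rgmanager cannot match them here. On a node where r0 is up they look like:

    ls -l /dev/drbd/by-res/r0/
    # 0 -> ../../drbd1
    # 1 -> ../../drbd2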

>>We built the DRBD RPMs from source using rpmbuild, and copied the
>>drbd.sh.rhcs and drbd.metadata.rhcs files from the scripts folder of the
>>source tree to /usr/share/cluster as drbd.sh and drbd.metadata respectively.
>>We are using RHEL 6.1 (64-bit).
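
For completeness, the copy was essentially the following (the scripts/ path inside the unpacked source tree is from memory and may differ):

    cp scripts/drbd.sh.rhcs       /usr/share/cluster/drbd.sh
    cp scripts/drbd.metadata.rhcs /usr/share/cluster/drbd.metadata
    chmod 755 /usr/share/cluster/drbd.sh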

  


