mirroring

Cesar Covarrubias cesar at uci.edu
Tue Oct 20 19:31:24 UTC 2009


Ok, my notes are below. Please note that this is a scary scary scary 
procedure to run on a production system. I don't take responsibility for 
any data loss or downtime. This worked for me, but I would test the 
procedure 10 times over on a dev box before you do it on a production 
box. And of course, I think it is a good idea to hear comments from 
fellow admins on this list for their opinions on this documentation 
before you do anything.

Cesar

PROCEDURE:

*Migrating an Existing Linux to RAID1*

<!> *WARNING WARNING WARNING: this copies your data from the existing 
disk to the new disk then overwrites your existing disk*

<!> *BACKUP existing disk NOW!*
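
For example, an image of the whole disk can be dumped with dd. This is just a sketch: /backup/sda.img is a placeholder path and must live on a third disk or remote host, never on sda or sdb.

    *

      linux:~ # dd if=/dev/sda of=/backup/sda.img bs=1M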

<!> *Must be done in SINGLE-USER MODE* -- Change inittab to have level 1 
as the default

    *

      linux:~ # telinit 1
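
The default runlevel is the initdefault line in /etc/inittab; for the migration it should read something like:

    *

      linux:~ # grep initdefault /etc/inittab
      id:1:initdefault: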

Beginning disk configuration:

    * /dev/sda - Installed non-raid system disk
    * /dev/sda1 - boot partition
    * /dev/sda2 - swap partition
    * /dev/sda3 - root partition
    * /dev/sdb - Empty disk for first raid mirror
    * /dev/md1 - boot mirrored partition
    * /dev/md2 - swap mirrored partition
    * /dev/md3 - root mirrored partition

Check current RAID configuration:

    *

      linux:~ # cat /proc/mdstat
      Personalities :
      unused devices: <none>

Confirm that both disks are the same size.

    *

      linux:~ # cat /proc/partitions
      major minor  #blocks  name

         8     0    2097152 sda
         8     1     514048 sda1
         8     2    1582402 sda2
         8    16    2097152 sdb
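
sfdisk can also print each disk's size in 1K blocks, which makes the comparison explicit (the sizes here match the /proc/partitions output above):

    *

      linux:~ # sfdisk -s /dev/sda
      2097152
      linux:~ # sfdisk -s /dev/sdb
      2097152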

Make sure that your devices do not have labels and that you are 
referencing the disks by device name.

    *

      linux:~ # cat /etc/fstab
      /dev/sda3               /                       ext3    defaults        1 1
      /dev/sda1               /boot                   ext3    defaults        1 2
      none                    /dev/pts                devpts  gid=5,mode=620  0 0
      none                    /dev/shm                tmpfs   defaults        0 0
      none                    /proc                   proc    defaults        0 0
      none                    /sys                    sysfs   defaults        0 0
      /dev/sda2               swap                    swap    defaults        0 0
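
The fstab above already uses device names, but the kernel line in grub.conf below still says root=LABEL=/; it will be replaced with a device path later in this procedure. You can check what label, if any, a filesystem carries with e2label (a quick sketch; your output may vary):

    *

      linux:~ # e2label /dev/sda3
      /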

Check current boot menu:

    *

      linux:~ # cat /etc/grub.conf
      #...
      #boot=/dev/sda
      default=0
      timeout=5
      splashimage=(hd0,0)/grub/splash.xpm.gz
      hiddenmenu
      title Red Hat Enterprise Linux AS (2.6.9-67.0.4.ELsmp)
              root (hd0,0)
              kernel /vmlinuz-2.6.9-67.0.4.ELsmp ro root=LABEL=/ rhgb quiet
              initrd /initrd-2.6.9-67.0.4.ELsmp.img
      ...

Change the partition types on the existing non-raid disk to 'fd' (Linux raid autodetect):

    *

      [root@elcapitan ~]# fdisk /dev/sda
        ... 
      Command (m for help): p

      Disk /dev/sda: 16.1 GB, 16106127360 bytes
      255 heads, 63 sectors/track, 1958 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          64      514048+  83  Linux
      /dev/sda2              65         319     2048287+  82  Linux swap
      /dev/sda3             320        1958    13165267+  83  Linux


      Command (m for help): t
      Partition number (1-4): 1
      Hex code (type L to list codes): fd
      Changed system type of partition 1 to fd (Linux raid autodetect)

      Command (m for help): t
      Partition number (1-4): 2
      Hex code (type L to list codes): fd
      Changed system type of partition 2 to fd (Linux raid autodetect)

      Command (m for help): t
      Partition number (1-4): 3
      Hex code (type L to list codes): fd
      Changed system type of partition 3 to fd (Linux raid autodetect)

      Command (m for help): p

      Disk /dev/sda: 16.1 GB, 16106127360 bytes
      255 heads, 63 sectors/track, 1958 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          64      514048+  fd  Linux raid autodetect
      /dev/sda2              65         319     2048287+  fd  Linux raid autodetect
      /dev/sda3             320        1958    13165267+  fd  Linux raid autodetect


      Command (m for help): w
      The partition table has been altered!

      Calling ioctl() to re-read partition table.

      WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
      The kernel still uses the old table.
      The new table will be used at the next reboot.
      Syncing disks.
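
For the record, the same type change can be made non-interactively with the old sfdisk's --change-id option (a sketch, equivalent to the fdisk session above):

    *

      linux:~ # sfdisk --change-id /dev/sda 1 fd
      linux:~ # sfdisk --change-id /dev/sda 2 fd
      linux:~ # sfdisk --change-id /dev/sda 3 fd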

Copy the non-raid disk's partition table to the empty disk.

    *

      linux:~ # sfdisk -d /dev/sda | sfdisk /dev/sdb 
      Checking that no-one is using this disk right now ...
      OK

      Disk /dev/sdb: 1958 cylinders, 255 heads, 63 sectors/track
      Old situation:
      Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

         Device Boot Start     End   #cyls    #blocks   Id  System
      /dev/sdb1          0       -       0          0    0  Empty
      /dev/sdb2          0       -       0          0    0  Empty
      /dev/sdb3          0       -       0          0    0  Empty
      /dev/sdb4          0       -       0          0    0  Empty
      New situation:
      Units = sectors of 512 bytes, counting from 0

         Device Boot    Start       End   #sectors  Id  System
      /dev/sdb1   *        63   1028159    1028097  fd  Linux raid autodetect
      /dev/sdb2       1028160   5124734    4096575  fd  Linux raid autodetect
      /dev/sdb3       5124735  31455269   26330535  fd  Linux raid autodetect
      /dev/sdb4             0         -          0   0  Empty
      Successfully wrote the new partition table

      Re-reading the partition table ...

      If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
      to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
      (See fdisk(8).)


      linux:~ # cat /proc/partitions
      major minor  #blocks  name

         8     0    2097152 sda
         8     1     514048 sda1
         8     2    1582402 sda2
         8    16    2097152 sdb
         8    17     514048 sdb1
         8    18    1582402 sdb2

Reboot to single-user mode to reload sda's modified partition table.

Select the non-raid boot entry (Red Hat Enterprise Linux AS).
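
(As an aside: if the parted package is installed, partprobe can sometimes make the kernel re-read the table without a reboot, but it does not always succeed on a busy system disk, so the reboot is the safer path.)

    *

      linux:~ # partprobe /dev/sda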

*Build the degraded RAID array*

Create the degraded RAID array on the empty disk, but leave out the 
existing system disk for now.

    *

      linux:~ # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
      mdadm: array /dev/md1 started.

      linux:~ # mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb2 missing
      mdadm: array /dev/md2 started.

      linux:~ # mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb3 missing
      mdadm: array /dev/md3 started.


      linux:~ # cat /proc/mdstat
      Personalities : [raid1]
      md3 : active raid1 sdb3[0]
            13165184 blocks [2/1] [U_]

      md2 : active raid1 sdb2[0]
            2048192 blocks [2/1] [U_]

      md1 : active raid1 sdb1[0]
            513984 blocks [2/1] [U_]

      unused devices: <none>

Create the degraded RAID array configuration file.

    *

      linux:~ # cat << EOF > /etc/mdadm.conf
      > DEVICE /dev/sdb1 /dev/sdb2 /dev/sdb3
      > ARRAY /dev/md1 devices=/dev/sdb1,missing
      > ARRAY /dev/md2 devices=/dev/sdb2,missing
      > ARRAY /dev/md3 devices=/dev/sdb3,missing
      > EOF

      linux:~ # cat /etc/mdadm.conf
      DEVICE /dev/sdb1 /dev/sdb2 /dev/sdb3
      ARRAY /dev/md1 devices=/dev/sdb1,missing
      ARRAY /dev/md2 devices=/dev/sdb2,missing
      ARRAY /dev/md3 devices=/dev/sdb3,missing
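
As an alternative to typing the ARRAY lines by hand, mdadm can generate them from the superblocks it finds; note it identifies arrays by UUID rather than by a device list, which also works (you still need the DEVICE line):

    *

      linux:~ # mdadm --examine --scan >> /etc/mdadm.conf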

Create filesystems (and swap) on new partitions:

    *

      linux:~ # mkswap /dev/md2
      Setting up swapspace version 1, size = 526315 kB

      linux:~ # mkfs.ext3 /dev/md1

      linux:~ # mkfs.ext3 /dev/md3

Confirm the degraded RAID array is functioning with only the previously 
empty disk.

    *

      linux:~ # mdadm --detail --scan
      ARRAY /dev/md3 level=raid1 num-devices=2 UUID=1e7c0428:b32909e8:252c5d95:77113b4a
      ARRAY /dev/md2 level=raid1 num-devices=2 UUID=79e38a93:dc308a0a:5f431903:58f1198c
      ARRAY /dev/md1 level=raid1 num-devices=2 UUID=2bcef1d5:2f24c0a2:f3a696a4:7212f482

      linux:~ # mdadm --stop --scan

      linux:~ # mdadm --detail --scan

<!> WARNING: Make sure you have created /etc/mdadm.conf as shown above, or 
mdadm --assemble --scan will fail.

    *

      linux:~ # mdadm --assemble --scan
      mdadm: /dev/md1 has been started with 1 drive (out of 2).
      mdadm: /dev/md2 has been started with 1 drive (out of 2).
      mdadm: /dev/md3 has been started with 1 drive (out of 2).

      linux:~ # mdadm --detail --scan
      ARRAY /dev/md3 level=raid1 num-devices=2 UUID=1e7c0428:b32909e8:252c5d95:77113b4a
      ARRAY /dev/md2 level=raid1 num-devices=2 UUID=79e38a93:dc308a0a:5f431903:58f1198c
      ARRAY /dev/md1 level=raid1 num-devices=2 UUID=2bcef1d5:2f24c0a2:f3a696a4:7212f482

Backup the original initrd.

    *

      linux:~ # cd /boot
      linux:~ # mv initrd-`uname -r`.img initrd-`uname -r`.img.orig

Add raid1 to the kernel modules loaded into the initrd. If 
INITRD_MODULES already exists in /etc/sysconfig/kernel, add raid1 to the 
space-delimited list. Otherwise:

    *

      echo INITRD_MODULES='"raid1"' >> /etc/sysconfig/kernel

Build new initrd:

    *

      linux:/boot # head /etc/sysconfig/kernel | grep INITRD_MODULES
      INITRD_MODULES="raid1"

      linux:/boot # mkinitrd -v initrd-`uname -r`.img `uname -r`
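
Note: /etc/sysconfig/kernel is a SUSE convention, and the Red Hat mkinitrd may ignore it. If the raid1 module does not end up in the new image, forcing it with mkinitrd's --preload option is an alternative (-f overwrites the image built above):

    *

      linux:/boot # mkinitrd -f -v --preload=raid1 initrd-`uname -r`.img `uname -r`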

<!> WARNING: If you attempt to boot the degraded RAID array without 
referencing an initrd that contains the raid1 driver or raidautorun, 
you will get a message that the /dev/md2 device is not found, and 
the server will hang.

Modify the grub.conf so you can boot from the non-RAID or the degraded 
RAID array, in case you make mistakes during the migration.

    * RAID root will be (hd1,0)
    * root=/dev/md3
    * Change the initrd for the non-raid boot entry to the .orig copy
          o

            linux:/boot/grub # vi /etc/grub.conf

            linux:/boot/grub # cat /etc/grub.conf
            # ...
            #boot=/dev/sda
            default=1
            timeout=5
            splashimage=(hd0,0)/grub/splash.xpm.gz
            hiddenmenu
            title Red Hat Enterprise Linux AS (2.6.9-67.0.4.EL)
                     root (hd0,0)
                     kernel /vmlinuz-2.6.9-67.0.4.EL ro root=LABEL=/ rhgb quiet
                     initrd /initrd-2.6.9-67.0.4.EL.img
            title Red Hat Enterprise Linux AS (2.6.9-67.0.4.ELsmp)
                     root (hd0,0)
                     kernel /vmlinuz-2.6.9-67.0.4.ELsmp ro root=/dev/sda3 rhgb quiet
                     initrd /initrd-2.6.9-67.0.4.ELsmp.img.orig
            title RAID
                     root (hd1,0)
                     kernel /vmlinuz-2.6.9-67.0.4.ELsmp ro root=/dev/md3 rhgb quiet
                     initrd /initrd-2.6.9-67.0.4.ELsmp.img

Copy the entire system from the non-raid device to the degraded RAID array.

    *

      linux:/ # cd /mnt
      linux:/mnt # mkdir newroot
      linux:/mnt # mount /dev/md3 /mnt/newroot
      linux:/mnt # cd /mnt/newroot

Create the boot directory and mount md1 on it, so the copied /boot files 
land on the mirrored boot partition.

    *

      linux:/mnt/newroot # mkdir boot
      linux:/mnt/newroot # mount /dev/md1 /mnt/newroot/boot

Do not copy mnt or proc to the degraded RAID array, but create 
placeholders for them (boot already exists from the previous step).

    *

      linux:/mnt/newroot # mkdir mnt proc

<!> WARNING: The /mnt/newroot/proc directory is used for the proc 
filesystem mount point. If it's missing, you will get an error saying 
/proc is not mounted, and the system will hang at boot time.

    *

      linux:/mnt/newroot # for x in `ls / | egrep -v "(^mnt$|^proc$)"` ; do echo -n "Copy files: /$x -> /mnt/newroot/$x ... "; cp -a /$x /mnt/newroot && echo done; done
      Copy files: /bin -> /mnt/newroot/bin ... done
      Copy files: /boot -> /mnt/newroot/boot ... done
      Copy files: /dev -> /mnt/newroot/dev ... done
      Copy files: /etc -> /mnt/newroot/etc ... done
      Copy files: /home -> /mnt/newroot/home ... done
      Copy files: /lib -> /mnt/newroot/lib ... done
      Copy files: /media -> /mnt/newroot/media ... done
      Copy files: /opt -> /mnt/newroot/opt ... done
      Copy files: /root -> /mnt/newroot/root ... done
      Copy files: /sbin -> /mnt/newroot/sbin ... done
      Copy files: /srv -> /mnt/newroot/srv ... done
      Copy files: /sys -> /mnt/newroot/sys ... done
      Copy files: /tmp -> /mnt/newroot/tmp ... done
      Copy files: /var -> /mnt/newroot/var ... done
      Copy files: /usr -> /mnt/newroot/usr ... done

<!> WARNING: If you attempt to copy files that have ACLs, you will get 
a warning that the original permissions cannot be restored; you will 
need to restore any ACLs manually. You may also get some permission-denied 
errors on files in the sys directory. Check the files, but you 
shouldn't have to worry about these errors.

    *

      linux:/mnt/newroot # ls /
      .  ..  bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  srv  sys  tmp  usr  var

      linux:/mnt/newroot # ls /mnt/newroot
      .  ..  bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  srv  sys  tmp  usr  var
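
If rsync is available, the copy loop above can be done in one command. This is a rough equivalent, not the author's method: the /mnt exclude also keeps rsync from descending into /mnt/newroot itself, and old rsync versions do not preserve ACLs, so the ACL caveat above still applies.

    *

      linux:/ # rsync -aH --exclude=/mnt --exclude=/proc / /mnt/newroot/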

Modify the fstab file on the degraded RAID array so that the system can 
boot it.

    *

      linux:~ # cat /mnt/newroot/etc/fstab
      /dev/sda3             /                       ext3    defaults        1 1
      /dev/sda1             /boot                   ext3    defaults        1 2
      none                  /dev/pts                devpts  gid=5,mode=620  0 0
      none                  /dev/shm                tmpfs   defaults        0 0
      none                  /proc                   proc    defaults        0 0
      none                  /sys                    sysfs   defaults        0 0
      /dev/sda2             swap                    swap    defaults        0 0

      linux:/mnt/newroot # vi /mnt/newroot/etc/fstab

      linux:~ # cat /mnt/newroot/etc/fstab
      /dev/md3            /                       ext3    defaults        1 1
      /dev/md1            /boot                   ext3    defaults        1 2
      none                /dev/pts                devpts  gid=5,mode=620  0 0
      none                /dev/shm                tmpfs   defaults        0 0
      none                /proc                   proc    defaults        0 0
      none                /sys                    sysfs   defaults        0 0
      /dev/md2            swap                    swap    defaults        0 0

Reboot to single-user mode again, this time booting the new RAID entry.

At this point you should be running your system from the degraded RAID 
array, and the non-raid disk is not even mounted.

    *

      linux:~ # mount
      /dev/md3 on / type ext3 (rw)
      ...
      /dev/md1 on /boot type ext3 (rw)
      ...

Update the raid configuration file to include both disks.

    *

      linux:~ # cat << EOF > /etc/mdadm.conf
      > DEVICE /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sda1 /dev/sda2 /dev/sda3
      > ARRAY /dev/md1 devices=/dev/sdb1,/dev/sda1
      > ARRAY /dev/md2 devices=/dev/sdb2,/dev/sda2
      > ARRAY /dev/md3 devices=/dev/sdb3,/dev/sda3
      > EOF

      linux:~ # cat /etc/mdadm.conf
      DEVICE /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sda1 /dev/sda2 /dev/sda3
      ARRAY /dev/md1 devices=/dev/sdb1,/dev/sda1
      ARRAY /dev/md2 devices=/dev/sdb2,/dev/sda2
      ARRAY /dev/md3 devices=/dev/sdb3,/dev/sda3

Add the non-raid disk partitions into their respective RAID arrays.

<!> ***WARNING*** <!>
*THIS IS THE POINT OF NO RETURN.*
<!> ***WARNING*** <!>

    *

      linux:~ # mdadm /dev/md1 -a /dev/sda1
      mdadm: hot added /dev/sda1

      linux:~ # mdadm /dev/md2 -a /dev/sda2
      mdadm: hot added /dev/sda2

      linux:~ # mdadm /dev/md3 -a /dev/sda3
      mdadm: hot added /dev/sda3

      linux:~ # cat /proc/mdstat
      Personalities : [raid1]
      md2 : active raid1 sda2[2] sdb2[0]
            2048192 blocks [2/1] [U_]
              resync=DELAYED
      md3 : active raid1 sda3[2] sdb3[0]
            13165184 blocks [2/1] [U_]
              resync=DELAYED
      md1 : active raid1 sda1[2] sdb1[0]
            513984 blocks [2/1] [U_]
            [=======>.............]  recovery = 38.0% (195904/513984) finish=1.3min speed=4014K/sec
      unused devices: <none>
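
The rebuild can be watched until it completes (purely a convenience):

    *

      linux:~ # watch -n 5 cat /proc/mdstat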

*After* recovery is done, install grub onto both disks so you can boot 
from either one in case of failure.

    *

      linux:~ # grub

          GNU GRUB  version 0.95  (640K lower / 3072K upper memory)

       [ Minimal BASH-like line editing is supported.  For the first word, TAB
         lists possible command completions.  Anywhere else TAB lists the possible
         completions of a device/filename.]

      grub> device (hd0) /dev/sda

      grub> root (hd0,0)
       Filesystem type is ext2fs, partition type 0xfd

      grub> setup (hd0)
       Checking if "/boot/grub/stage1" exists... no
       Checking if "/grub/stage1" exists... yes
       Checking if "/grub/stage2" exists... yes
       Checking if "/grub/e2fs_stage1_5" exists... yes
       Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
      succeeded
       Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
      Done.

      grub> device (hd1) /dev/sdb

      grub> root (hd1,0)
       Filesystem type is ext2fs, partition type 0xfd

      grub> setup (hd1)
       Checking if "/boot/grub/stage1" exists... no
       Checking if "/grub/stage1" exists... yes
       Checking if "/grub/stage2" exists... yes
       Checking if "/grub/e2fs_stage1_5" exists... yes
       Running "embed /grub/e2fs_stage1_5 (hd1)"...  16 sectors are embedded.
      succeeded
       Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub
      .conf"... succeeded
      Done.

      grub> quit

<!> **WARNING**: If you do not reinstall grub, after rebooting you will 
get the word GRUB alone on the screen. If that happens, boot from your 
install CD 1, select Installation, your language, and 'Boot installed 
system'. Once the system is up, follow the steps above to install grub 
onto the drives.

Remove the original initrd, as it is useless at this point.

    *

      linux:/boot # ls -l /boot/initrd*
      -rw-r--r--  1 root root 788775 Mar  7 11:09 /boot/initrd-2.6.9-67.0.4.EL.img
      -rw-r--r--  1 root root 783027 Mar  7 13:19 /boot/initrd-2.6.9-67.0.4.ELsmp.img
      -rw-r--r--  1 root root 788995 Mar  7 11:05 /boot/initrd-2.6.9-67.EL.img
      -rw-r--r--  1 root root 774575 Mar  7 11:05 /boot/initrd-2.6.9-67.ELsmp.img
      -rw-r--r--  1 root root 774575 Mar  7 11:05 /boot/initrd-2.6.9-67.ELsmp.img.orig

      linux:/boot # rm /boot/initrd-*.orig

      linux:/boot # cd grub

Remove the now-useless non-raid boot option(s), and change the boot disk 
to (hd0,0), the first disk.

    *

      linux:/boot/grub # cat /etc/grub.conf
      # ...
      title LinuxRaid
          root (hd0,0)
          kernel /vmlinuz-2.6.9-67.0.4.ELsmp ro root=/dev/md3 rhgb quiet
          initrd /initrd-2.6.9-67.0.4.ELsmp.img

<!> Change inittab to have level 3 or 5 (whichever it was before) as the 
default

Reboot to multi-user mode.

    *

      linux:~ # mdadm --detail --scan
      ARRAY /dev/md2 level=raid1 num-devices=2 UUID=66e0c793:ebb91af6:f1d5cde8:81f9b986
         devices=/dev/sdb2,/dev/sda2
      ARRAY /dev/md1 level=raid1 num-devices=2 UUID=0c70c3f5:28556506:9bd29f42:0486b2ea
         devices=/dev/sdb1,/dev/sda1
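
Once the resync has finished, every array should report [2/2] [UU] in /proc/mdstat; abridged expected output (block counts from the example system above):

    *

      linux:~ # cat /proc/mdstat
      Personalities : [raid1]
      md3 : active raid1 sda3[1] sdb3[0]
            13165184 blocks [2/2] [UU]
      ...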


cesar at uci.edu wrote:
> I believe I have notes on how to do this at my office. As soon as I get there tomorrow I will check and send if I have them.
>
> Cesar
> ------Original Message------
> From: Geofrey Rainey
> Sender: redhat-list-bounces at redhat.com
> To: General Red Hat Linux discussion list
> ReplyTo: General Red Hat Linux discussion list
> Subject: RE: mirroring
> Sent: Oct 19, 2009 6:54 PM
>
> It might be possible with software raid. I don't think it would be with
> hardware raid because you'd have to delete partitions etc.
>
> -----Original Message-----
> From: redhat-list-bounces at redhat.com
> [mailto:redhat-list-bounces at redhat.com] On Behalf Of Sir June
> Sent: Tuesday, 20 October 2009 1:13 p.m.
> To: General Red Hat Linux discussion list
> Subject: mirroring
>
> Hi,
>
> I have 2 identical disks but I had installed RHEL 4 on first disk only,
> i have /boot and / partitions only.  Now, i want to have mirror (raid1)
> with the 2nd disk. Can i do this without destroying data on the 1st
> disk?   Is there a good howto out there ? 
>
>  Sir June



