[Linux-cluster] ha-lvm

Jonathan Barber jonathan.barber at gmail.com
Thu Nov 4 10:42:29 UTC 2010

On 3 November 2010 17:55, Randy Zagar <zagar at arlut.utexas.edu> wrote:
> I frequently find that I'm unable to umount volumes, even after lsof
> and fuser return nothing relevant, and have to "force" a "lazy" umount
> like so:
>    umount -lf /dir
> because both "umount /dir" and "umount -f /dir" fail.

That's a cool option, but I'd be very worried about corrupting the
filesystem if it was mounted on a second node whilst a process was
holding the filesystem open on the original node.

> - -RZ
>> On Nov 3, 2010, at 2:15 AM, "Jankowski, Chris"
>> <Chris.Jankowski at hp.com> wrote:
>>> Corey,
>>> I vaguely remember from my work on UNIX clusters many years ago
>>> that if /dir is the mount point of a mounted filesystem, then cd'ing
>>> to /dir, or to any directory below it, from an interactive shell
>>> will prevent an unmount of the filesystem, i.e. umount /dir will
>>> fail. I believe this restriction exists because unmounting would
>>> create an inconsistency in the state of the shell process. lsof
>>> will not show it.
>>> Of course most users after login end up in the home directory by
>>> default.
>>> I believe that Linux will have the same semantics as UNIX. You
>>> can test that easily on a standalone Linux box.
>>> Regards,
>>> Chris Jankowski
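Chris's scenario is easy to reproduce on a standalone box (as root; the tmpfs mount and paths below are illustrative):

```shell
# Reproduce the "busy mount" from a shell's cwd (as root; paths illustrative)
mkdir -p /mnt/demo
mount -t tmpfs none /mnt/demo
cd /mnt/demo
umount /mnt/demo   # fails: our own shell's cwd pins the mount
cd /
umount /mnt/demo   # succeeds once nothing holds a cwd or open file inside
```

One caveat: in my experience fuser and lsof do normally report a process whose current directory is inside the mount (fuser -vm flags it with access type "c"), so a holder that is invisible to both tools is more often a kernel-side reference such as an NFS export or a loop device.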
>>> -----Original Message-----
>>> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Corey Kovacs
>>> Sent: Wednesday, 3 November 2010 07:15
>>> To: linux clustering
>>> Subject: [Linux-cluster] ha-lvm
>>> Folks,
>>> I have a 5 node cluster backed by an FC SAN, with 5 VGs, each
>>> containing a single LV.
>>> I am using HA-LVM and have lvm.conf configured to use tags as per
>>> the instructions. Things work fine until I try to migrate the
>>> volume containing our home directories (all the others work as
>>> expected). The umount for that volume fails and, depending on the
>>> active config, the node either reboots itself (self_fence=1) or the
>>> service simply fails and gets disabled.
>>> lsof doesn't reveal anything "holding" onto that mount point, yet
>>> the umount fails consistently (force_umount is enabled).
>>> Furthermore, it appears at least one of my VGs has bad tags. Is
>>> there a way to show what tags a VG has?
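On the tags question: the LVM reporting tools can print tags directly (the VG and tag names below are illustrative):

```shell
# Print the tags attached to each VG and LV (requires lvm2)
vgs -o vg_name,vg_tags
lvs -o vg_name,lv_name,lv_tags
# Tags can also be added or removed explicitly (names illustrative):
vgchange --addtag node1 myvg
vgchange --deltag node1 myvg
```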
>>> I've gone over the config several times and although I cannot
>>> show the config, here is a basic rundown in case something jumps
>>> out...
>>> - 5 nodes, DL360 G5, 2x quad-core, 16GB RAM
>>> - EVA8100, 2x 4Gb FC, multipath
>>> - 5 VGs, each with a single LV, each with an ext3 fs
>>> - HA-LVM in use as a measure of protection for the ext3 filesystems
>>> - local locking only, via lvm.conf
>>> - tags enabled via lvm.conf
>>> - initrds are newer than the lvm.conf changes
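For reference, a tag-based HA-LVM setup hinges on the volume_list filter in lvm.conf; a minimal sketch, assuming the node's tag is "node1" and the root VG is "rootvg" (both names are assumptions):

```
# /etc/lvm/lvm.conf -- illustrative fragment; tag and VG names are assumptions
activation {
    # Activate only the local root VG and anything tagged with this node's tag
    volume_list = [ "rootvg", "@node1" ]
}
```

As the rundown above notes, the initrd must be rebuilt after changing lvm.conf so that early boot applies the same activation policy.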
>>> I did notice that the ext3 label on the home volume was not of the
>>> form /home (it was /ha_home, left over from early testing), but
>>> I've corrected that and the umount failure still occurs.
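The label can be checked and reset without reformatting (the device path is illustrative; needs root):

```shell
# Inspect and change an ext3 label (device path is illustrative; needs root)
e2label /dev/myvg/home           # print the current label
e2label /dev/myvg/home /home     # set the label to /home
```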
>>> If anyone has any ideas I'd appreciate it.
>>> -- Linux-cluster mailing list Linux-cluster at redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
