[linux-lvm] Running vgscan while swap on LVM is already active

Urs Thuermann urs at isnogud.escape.de
Fri Jun 11 16:24:21 UTC 2004


OK, following up to myself.  I've done some experiments to understand
what's going on in the kernel.  For those interested, here is what
I've just learned:

Urs Thuermann <urs at isnogud.escape.de> writes:

> But is the active swap space somehow tied to the device node
> /dev/vg0/swap in the file system?  I thought this is not the case.
> And even if it is, this behavior still surprises me because the
> device node is recreated with the same device number and even inode
> number.

This wasn't quite correct.  When running vgscan while /dev/vg0/swap
is active, the device node of course has the same major/minor device
numbers after the vgscan, but a new inode number.  So it seems that
the kernel keeps the inode of the device node open as long as the
device is used as swap: vgscan unlinks /dev/vg0/swap, but the inode
isn't removed since it's still open; vgscan then recreates
/dev/vg0/swap on a different inode.  This is why I get

isnogud:root# swapon -s
Filename                        Type            Size    Used    Priority
/dev/vg0/swap (deleted)         partition       262136  0       -2

just like lsof shows for deleted files.
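
The inode change is easy to see directly.  A minimal sketch, assuming
the VG is named vg0 as on my system:

   $ ls -li /dev/vg0/swap    # note the inode number and major/minor
   $ vgscan
   $ ls -li /dev/vg0/swap    # same major/minor numbers, new inode number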

Another experiment also shows this:  I mounted a newly created file
system on /mnt, created a device node for my swap space on it, and
turned swap on through that node:

   $ mkfs /dev/vg0/foo            # make a file system on a spare LV
   $ mount /dev/vg0/foo /mnt
   $ cp -a /dev/vg0/swap /mnt/s   # copies the device node itself
   $ swapon /mnt/s
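
At this point swap is active through the copied node: swapon -s,
which reads /proc/swaps, should now list /mnt/s as the swap file.

   $ swapon -s               # now lists /mnt/s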

Then I can't unmount /mnt since it's busy: the kernel holds the inode
of /mnt/s open.  I first have to swapoff:

   $ swapoff /mnt/s
   $ umount /mnt

This is different from mounting a file system, where the kernel
doesn't seem to keep the inode of the block device open.  You can
unmount the file system holding the device node while the device is
still mounted, and vgscan can unlink and recreate the device node
with the same inode number because the inode isn't busy.  Therefore,
you can still unmount LVs after vgscan, but you cannot run swapoff on
an LV after vgscan.
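
To double-check the mount case, the same steps can be repeated with a
mounted LV instead of a swap LV.  A minimal sketch, reusing the LV
/dev/vg0/foo from the experiment above:

   $ mount /dev/vg0/foo /mnt
   $ ls -i /dev/vg0/foo      # note the inode number
   $ vgscan
   $ ls -i /dev/vg0/foo      # same inode number: the node wasn't busy
   $ umount /mnt             # still works after vgscan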

BTW, is there a way to see all open inodes in the kernel?  I haven't
found anything in the /proc FS.  lsof shows open inodes held by
processes but obviously the kernel itself can also have open inodes.


urs


