[linux-lvm] Recent dissections

Luben Tuikov luben at splentec.com
Fri Oct 25 11:55:01 UTC 2002


As the folks on linux-xfs know, I've recently tried to
get xfs on top of lvm on top of raid5 on top of some
scsi disks to work.

Here are some recent curiosities which for now seem
to work around both the mounts stuck in D state and the BUG() in unlock_page():

Here is the basic framework:

 FS --> LVM --> md(raid5) --> scsi disks           (1)

When FS = ext2, the above setup works all right;
when FS = xfs, then as you know, mount either
sleeps indefinitely on down() or hits BUG() in unlock_page().

I've written a tiny SCSI simulator driver for
block devices (non-SCSI; unrelated to all this);
let's call it ``sbs''.

Strangely, when the above setup is changed like so:

 xfs --> LVM --> sbs --> md(raid5) --> scsi disks  (2)

then everything works all right.

The only way I can explain this is that the transformation
the bh's undergo on the way down and back up somehow confuses
xfs in setup (1) (remember, ext2 works all right in (1)),
but when sbs sits between LVM and md, things are OK
for xfs.
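
To make ``transformation'' concrete: a 2.4 stacking driver doesn't
allocate a new bh, it remaps the one it is handed inside its
make_request function. Roughly what LVM does (in the spirit of
lvm_map() in drivers/md/lvm.c; pv_dev, pe_start and offset are
made-up names for this sketch, not the real variables):

    bh->b_rdev    = pv_dev;             /* redirect to the physical volume */
    bh->b_rsector = pe_start + offset;  /* remapped sector on that PV */
    return 1;   /* non-zero: generic_make_request() resubmits the bh
                   against the new b_rdev */

So the same struct buffer_head travels down the whole stack, mutated
at every level.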

This ``solution'' was prompted by other curiosities we
observed here, e.g. that when either md or LVM was removed
from (1) with xfs, things worked OK.

So, to make everyone happy, we put sbs between
LVM and md (see (2)), so that from xfs's point of view
it looks like LVM is talking to a scsi block device, and
from md's point of view it looks like it (md) is getting
its bh's straight from the linux block layer.
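
For the curious, the whole of sbs's stacking logic amounts to little
more than the following (a simplified sketch of a 2.4-style
make_request_fn -- sbs_make_request and sbs_target_dev are
placeholder names for this sketch, not the real code):

    #include <linux/fs.h>
    #include <linux/blkdev.h>

    static kdev_t sbs_target_dev;      /* the underlying device (md here) */

    static int sbs_make_request(request_queue_t *q, int rw,
                                struct buffer_head *bh)
    {
            /* 1:1 pass-through: same sector, different device. */
            bh->b_rdev = sbs_target_dev;

            /* Non-zero return tells generic_make_request() to
             * resubmit the bh against the remapped b_rdev. */
            return 1;
    }

registered at init time with blk_queue_make_request(), the same way
LVM and md register theirs.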

I hope these curiosities help people get some clue(s)
as to what could be happening and resolve this -- especially
those who know what fs/xfs/pagebuf/*.c is all about...

The bh transformation, the collecting of them back into a page in
xfs from xx_end_io(), etc... where could the bug be?
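
For reference, the stock 2.4 pattern for collecting bh completions
back into a page looks roughly like this (modeled on
end_buffer_io_async() in fs/buffer.c; simplified, with the
page_uptodate_lock locking omitted):

    #include <linux/fs.h>
    #include <linux/mm.h>

    static void sketch_end_bh_io(struct buffer_head *bh, int uptodate)
    {
            struct page *page = bh->b_page;
            struct buffer_head *tmp;

            mark_buffer_uptodate(bh, uptodate);
            unlock_buffer(bh);

            /* Walk the circular b_this_page ring; if any sibling bh
             * on this page is still under I/O, we are not the last
             * completion and must leave the page alone. */
            for (tmp = bh->b_this_page; tmp != bh; tmp = tmp->b_this_page)
                    if (buffer_locked(tmp))
                            return;

            /* Last completion on the page unlocks it exactly once;
             * unlocking an already-unlocked page is what trips the
             * BUG() in unlock_page(). */
            SetPageUptodate(page);
            UnlockPage(page);
    }

If pagebuf keeps its own variant of this walk, a bh that gets
transformed on the way down could plausibly make the ``last
completion'' test fire twice, which is exactly where BUG() in
unlock_page() would trip -- speculation on my part, but it fits.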

-- 
Luben

P.S. No noticeable performance penalty from putting sbs in
the middle. But that's irrelevant.



