[linux-lvm] lvm2 *TEMPORARY* PV failure - what happens?

Jonathan E Brassow jbrassow at redhat.com
Tue Apr 25 20:21:00 UTC 2006

It is simple to play with this type of scenario by doing:

echo offline > /sys/block/<sd dev>/device/state

and later

echo running > /sys/block/<sd dev>/device/state

I know this doesn't answer your question directly.
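A fuller sketch of the failure/recovery cycle, with the LVM side included. The device name (sdb) and VG name (vg0) are assumptions; these commands need root and a real SCSI device, so treat this as an outline rather than a tested script:

```shell
# Assumed names: /dev/sdb is the failing PV's disk, vg0 is the VG.
# Run as root on a TEST machine -- this really takes the disk offline.

# 1. Simulate the transient failure.
echo offline > /sys/block/sdb/device/state

# 2. While the PV is gone, LVs whose extents live entirely on the
#    surviving PVs can still be activated in degraded mode:
vgchange -ay --partial vg0

# 3. Bring the device back and let LVM rescan and reactivate.
echo running > /sys/block/sdb/device/state
pvscan
vgchange -ay vg0
```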


On Apr 25, 2006, at 2:57 PM, Ming Zhang wrote:

> My 2c -- correct me if I am wrong.
>
> Either activate the VG partially; then all LVs on the other PVs are
> still accessible. I remember these LVs will only have read-only
> access, though I have no idea why.
>
> Or use dm-zero to generate a fake PV and add it to the VG, so the VG
> can activate and you can access those LVs. But I do not know what
> will happen if you access an LV that is partially or fully on the
> missing PV.
>
> Ming
> On Tue, 2006-04-25 at 13:08 -0600, Ty! Boyack wrote:
>> I've been intrigued by the discussion of what happens when a PV fails,
>> and have begun to wonder what would happen in the case of a transient
>> failure of a PV.
>>
>> The design I'm thinking of is a SAN environment with several
>> multi-terabyte iSCSI arrays as PVs, grouped together into a single
>> VG, with LVs then carved out of that.  We plan on using the CLVM
>> tools to fit into a clustered environment.
>>
>> The arrays themselves are robust (RAID 5/6, redundant power supplies,
>> etc.), and I grant that if we lose the actual array (for example, if
>> multiple disks fail), then we are in the situation of a true and
>> possibly total failure of the PV and loss of its data blocks.
>>
>> But there is always the possibility that we could lose the CPU,
>> memory, bus, etc. in the iSCSI controller portion of the array, which
>> would cause downtime but no true loss of data.  Or someone may hit
>> the wrong power switch and just reboot the thing, taking it offline
>> for a short time.  Yes, that someone would probably be me.  Shame on
>> me.
>>
>> The key point is that the iSCSI disk will come back in a few
>> minutes/hours/days depending on the failure type, and all blocks will
>> be intact when it comes back up.  I suppose the analogous situation
>> would be using LVM on a group of hot-swap drives: pulling one of the
>> disks, waiting a while, and then re-inserting it.
>>
>> Can someone please walk me through the resulting steps that would
>> happen within LVM2 (or a GFS filesystem on top of that LV) in this
>> situation?
>>
>> Thanks,
>> -Ty!
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
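
Ming's dm-zero idea above can be sketched roughly as follows. Note one catch: dm-zero discards writes, so a bare zero target cannot hold a PV label; the usual trick is to layer a small copy-on-write snapshot on top of it. All names, sizes, and the UUID below are placeholders, the exact pvcreate flags vary by lvm2 version, and everything needs root:

```shell
# Assumed names: vg0 is the VG; SECTORS must match the missing PV's size.
SECTORS=41943040                                  # 20 GiB, assumed
dd if=/dev/zero of=/tmp/cow.img bs=1M count=64    # small COW store
COW=$(losetup -f --show /tmp/cow.img)

# Zero target underneath, writable snapshot on top (chunk size 8 sectors).
dmsetup create zerodev --table "0 $SECTORS zero"
dmsetup create fakepv  --table "0 $SECTORS snapshot /dev/mapper/zerodev $COW p 8"

# Stamp it with the missing PV's UUID (placeholder shown) so LVM accepts
# it in place of the lost device, then restore and activate the VG.
pvcreate --uuid "placeholder-uuid" --restorefile /etc/lvm/backup/vg0 /dev/mapper/fakepv
vgcfgrestore vg0
vgchange -ay vg0
```

Reads of any LV extents that lived on the missing PV will return zeros, so this only makes sense as a way to get the rest of the VG online.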
