[dm-devel] Re: raid failure and LVM volume group availability

Goswin von Brederlow goswin-v-b at web.de
Thu May 28 18:48:10 UTC 2009


Neil Brown <neilb at suse.de> writes:

> On Tuesday May 26, goswin-v-b at web.de wrote:
>> hank peng <pengxihan at gmail.com> writes:
>> 
>> > Only one of the disks in this RAID1 failed, so it should continue to
>> > work in a degraded state.
>> > Why did LVM complain about I/O errors?
>> 
>> That is because the last drive in a raid1 cannot be failed:
>> 
>> md9 : active raid1 ram1[1] ram0[2](F)
>>       65472 blocks [2/1] [_U]
>> 
>> # mdadm --fail /dev/md9 /dev/ram1
>> mdadm: set /dev/ram1 faulty in /dev/md9
>> 
>> md9 : active raid1 ram1[1] ram0[2](F)
>>       65472 blocks [2/1] [_U]
>> 
>> See, ram1 is still marked as working.
>> 
>> MfG
>>         Goswin
>> 
>> PS: Why doesn't mdadm or the kernel print a message about the refusal
>> to fail the device?
>
> -ENOPATCH :-)
>
> You would want to rate limit any such message from the kernel, but it
> might make sense to have it.
>
> NeilBrown

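For reference, the silent refusal comes from the raid1 error path: if the
target is the last in-sync member, md simply returns without marking it
Faulty. A simplified sketch of that guard (paraphrased, not verbatim from
drivers/md/raid1.c; the rate-limited warning is only the hypothetical
message suggested above):

/* Sketch of the guard in the raid1 error handler (simplified, not
 * verbatim kernel code): if the device being failed is the last
 * in-sync member, md quietly refuses and keeps the array running. */
static void error(mddev_t *mddev, mdk_rdev_t *rdev)
{
	conf_t *conf = mddev->private;

	if (test_bit(In_sync, &rdev->flags) &&
	    (conf->raid_disks - mddev->degraded) == 1) {
		/* Hypothetical message, rate-limited so a stream of
		 * I/O errors on the last drive cannot flood the log. */
		if (printk_ratelimit())
			printk(KERN_WARNING
			       "md/raid1:%s: not failing last working device\n",
			       mdname(mddev));
		return;	/* behave like a plain single drive */
	}
	/* ... otherwise mark rdev Faulty and bump mddev->degraded ... */
}
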
There is no rate-limit risk in mdadm --fail itself reporting a failure to
fail the device; it runs once per invocation and can simply re-check the
device state after issuing the request.
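
A sketch of such a check (hypothetical, not mdadm's actual code; the
helper name and the trimmed error handling are mine, but SET_DISK_FAULTY
and GET_DISK_INFO are the real md ioctls from <linux/raid/md_u.h>):

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <sys/ioctl.h>
#include <linux/raid/md_u.h>
#include <linux/raid/md_p.h>

/* Ask the kernel to fail one member of an md array, then re-read the
 * member's state to detect a silent refusal (last working drive in a
 * raid1).  Error handling trimmed for brevity. */
int fail_and_verify(int md_fd, const char *dev_path)
{
	struct stat st;
	mdu_disk_info_t info;

	if (stat(dev_path, &st) < 0)
		return -1;
	/* The ioctl "succeeds" even when md refuses to fail the drive. */
	if (ioctl(md_fd, SET_DISK_FAULTY, (unsigned long)st.st_rdev) < 0)
		return -1;

	/* Walk the slots, find this device, and check its state. */
	for (info.number = 0; info.number < MD_SB_DISKS; info.number++) {
		if (ioctl(md_fd, GET_DISK_INFO, &info) < 0)
			continue;
		if (info.major != major(st.st_rdev) ||
		    info.minor != minor(st.st_rdev))
			continue;
		if (!(info.state & (1 << MD_DISK_FAULTY))) {
			fprintf(stderr, "%s was not failed; "
				"is it the last working device?\n", dev_path);
			return 1;
		}
		return 0;
	}
	return -1;
}

One extra GET_DISK_INFO round-trip per --fail costs nothing, and a
one-line warning there would have answered the original confusion without
touching the kernel.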

MfG
        Goswin
