[linux-lvm] lvm2 raid volumes
Steve Dainard
sdainard at spd1.com
Tue Aug 2 22:49:44 UTC 2016
Hello,
What are the methods for checking/monitoring a RAID LV?
The Cpy%Sync field seems promising here:
# lvs
  LV    VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid1 test rwi-aor--- 100.00m                                    100.00
  raid6 test rwi-aor--- 108.00m                                    100.00
# pvs
PV VG Fmt Attr PSize PFree
/dev/vdb test lvm2 a-- 1020.00m 876.00m
/dev/vdc test lvm2 a-- 1020.00m 876.00m
/dev/vdd test lvm2 a-- 1020.00m 980.00m
/dev/vde test lvm2 a-- 1020.00m 980.00m
/dev/vdf test lvm2 a-- 1020.00m 980.00m
But testing in a VM by removing a disk does not change the output of lvs:
# pvs
  WARNING: Device for PV S5xFZ7-mLaH-GNQP-ujWh-Zbkt-Ww3u-J0aKUJ not found or rejected by a filter.
PV VG Fmt Attr PSize PFree
/dev/vdb test lvm2 a-- 1020.00m 876.00m
/dev/vdc test lvm2 a-- 1020.00m 876.00m
/dev/vdd test lvm2 a-- 1020.00m 980.00m
/dev/vde test lvm2 a-- 1020.00m 980.00m
unknown device test lvm2 a-m 1020.00m 980.00m
# lvs
  WARNING: Device for PV S5xFZ7-mLaH-GNQP-ujWh-Zbkt-Ww3u-J0aKUJ not found or rejected by a filter.
  LV    VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  raid1 test rwi-aor--- 100.00m                                    100.00
  raid6 test rwi-aor-p- 108.00m                                    100.00
My end goal is to write a Nagios check to monitor for disk failures.
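For reference, a minimal sketch of such a check, assuming the lv_attr layout documented in lvm(8), where the 9th attribute character is the volume health bit ('p' = partial, i.e. a device is missing; 'r' = refresh needed; 'm' = mismatches exist). The function name check_lv_attr is made up for illustration; a real check would feed it lines from `lvs --noheadings -o lv_name,lv_attr <vg>` and map the result to Nagios exit codes:

```shell
#!/bin/sh
# Hypothetical sketch: classify a single lv_attr string by its 9th
# (health) character, per the lv_attr description in lvm(8).
check_lv_attr() {
    attr="$1"
    health=$(printf '%s' "$attr" | cut -c9)
    case "$health" in
        p)   echo "CRITICAL: LV is partial (missing device)"; return 2 ;;
        r|m) echo "WARNING: LV needs refresh or has mismatches"; return 1 ;;
        *)   echo "OK"; return 0 ;;
    esac
}

check_lv_attr "rwi-aor-p-"   # the post-failure raid6 attr shown above
```

Newer lvm2 releases also expose a dedicated lv_health_status reporting field (`lvs -o +lv_health_status`), which may be a cleaner thing to parse than the packed attr string if your version supports it.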
Thanks,
Steve