[linux-lvm] LVM2 [lvcreate --type raid5] issue
Oliver Rath
rath at mglug.de
Mon Feb 3 10:17:24 UTC 2014
Hi James,
I was able to reproduce this on an Ubuntu 12.04.3 machine with LVM
2.02.105+ (compiled from git). It seems that the "-s 4kib" parameter
causes this error. If the extent size is not set explicitly, a default
of 4 MiB is assumed, so you chose a really tiny size (1/1000th of the
default). When I omit this option, everything seems to run fine.
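For reference, here is the same sequence as in your c_lv script, just without the tiny extent size (device names and lvcreate flags taken from your script; I have not re-run exactly this on your hardware, so treat it as a sketch):

```shell
# Same setup as the c_lv script, but letting vgcreate use its
# default 4 MiB extent size instead of "-s 4kib".
pvcreate -M2 -v /dev/sdb1 /dev/sdc1 /dev/sdd1
vgcreate -v vg_system1 /dev/sdb1 /dev/sdc1 /dev/sdd1   # default extent size: 4 MiB
lvcreate --type raid5 -i 2 -I 4 -L 5G -ay -Zy -v -n lv_system1 vg_system1
```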
I am not a developer of this, but I could imagine that with such a tiny
extent size the metadata runs out of space. Setting it to at least
64 KiB seems to work fine. I think going smaller than 64 KiB makes
little sense anyway, because that is about the smallest read size of
most hard disks.
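To illustrate why the extent size matters, here is a rough calculation (my own, back-of-the-envelope) of how many extents LVM has to track for your 5 GiB LV at the default extent size versus 4 KiB:

```shell
# Extents LVM must track for a 5 GiB logical volume,
# at the default 4 MiB extent size vs. the 4 KiB used in c_lv.
lv_bytes=$((5 * 1024 * 1024 * 1024))
ext_default=$((lv_bytes / (4 * 1024 * 1024)))   # 4 MiB extents
ext_tiny=$((lv_bytes / (4 * 1024)))             # 4 KiB extents
echo "4 MiB extents: $ext_default"   # 1280
echo "4 KiB extents: $ext_tiny"      # 1310720
```

So the 4 KiB setting inflates the extent count by a factor of 1024, which makes it plausible that the VG metadata area overflows.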
Hth
Oliver
On 02.02.2014 20:53, James Woolliscroft wrote:
>
> Hi
>
>
>
> New to Linux, not sure if this is a legitimate fault, or if I am doing
> something wrong. Please can you advise on which release of LVM2 (if
> any) fixes this issue.
>
>
>
> Initially tested using SL6.4, now testing with SL6.5. I have used both
> LiveDVD and basic "Minimal Desktop" Installations on the target
> computer and inside a VirtualBox Virtual Machine.
>
>
>
> I am preparing to install SL6.5 on a small system with 3x750GB Disks.
> As a pre-requisite, I wish to test the performance of the disks
> using different chunk/block sizes using LVM2 RAID5.
>
>
>
> It is a preference to use LVM2 RAID5 rather than mdraid and LVM2 to
> reduce the complexity of setting up and managing the system.
>
>
>
> However I am experiencing a problem when creating the Logical Volume.
> This issue occurs with both SL6.4 and SL6.5:
>
>
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0:
> Input/output error
>
> /dev/vg_system1/lv_system1: write failed after 0 of 4096 at 0:
> Input/output error
>
>
>
> There was one additional issue with SL6.4 which also appears to be
> partially resolved in SL6.5. Certain commands function without an
> issue, however once the issue manifests, it appears in most or all
> commands:
>
>
>
> [root at localhost ~]# pvscan -vd
>
> Wiping cache of LVM-capable devices
>
> Wiping internal VG cache
>
> Walking through all physical volumes
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at
> 5368643584: Input/output error
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at
> 5368700928: Input/output error
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0:
> Input/output error
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096:
> Input/output error
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0:
> Input/output error
>
> PV /dev/sdb1 VG vg_system1 lvm2 [16.00 GiB / 13.50 GiB free]
>
> PV /dev/sdc1 VG vg_system1 lvm2 [16.00 GiB / 13.50 GiB free]
>
> PV /dev/sdd1 VG vg_system1 lvm2 [16.00 GiB / 13.50 GiB free]
>
> PV /dev/sda2 VG VolGroup lvm2 [11.51 GiB / 0 free]
>
> Total: 4 [59.50 GiB] / in use: 4 [59.50 GiB] / in no VG: 0 [0 ]
>
>
>
> I am at a loss to explain this, I cannot find any information relating
> to these errors.
>
>
>
> If I cannot resolve this within a reasonable period of time, I will be
> forced to use mdraid, which is much less flexible -- i.e. if I wish to
> use different RAID modes, I have to use separate partitions/volume
> groups. For example, I would consider using unraided partitions for
> /boot, RAID 0 for swap and /tmp, and RAID 5 for the rest of the system.
> Separating these into different partitions means the system will have
> to span further across the disk at times.
>
>
>
> I am considering testing a later revision of LVM2, as I understand
> that SL6.5 ships LVM2-2.02.100-8 while minor versions 101
> -- 105 are available. I understand that to do this I may have to
> download the source code of LVM2 and its dependent packages and
> compile them. This is not something I can easily do, but I am
> prepared to try if I can get some guidance.
>
>
>
> It may be useful to test this in RHEL 6.5 and CentOS 6.5 though I
> could only do this on the latter myself.
>
>
>
> Below are the scripts to destroy the test LV/VG/PV and a test pass,
> and a script to create the PV/VG/LV as executed on a test system. Each
> attempt I execute the destroy script prior to running the create
> script. There is also a dump of the partition tables.
>
>
>
> I hope you can point me in the right direction!
>
>
>
> Thanks and Regards,
>
>
>
> J Woolliscroft
>
> [root at localhost ~]# cat d_lv
>
> lvremove /dev/vg_system1/lv_system1
>
> vgremove vg_system1
>
> pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1
>
> =====
>
> [root at localhost ~]# . ./d_lv
>
> Do you really want to remove active logical volume lv_system1? [y/n]: y
>
> Logical volume "lv_system1" successfully removed
>
> Volume group "vg_system1" successfully removed
>
> Labels on physical volume "/dev/sdb1" successfully wiped
>
> Labels on physical volume "/dev/sdc1" successfully wiped
>
> Labels on physical volume "/dev/sdd1" successfully wiped
>
>
>
> [root at localhost ~]# cat c_lv
>
> parted -l
>
>
>
> pvcreate -M2 -v /dev/sdb1 /dev/sdc1 /dev/sdd1
>
>
>
> pvs -v -d
>
> pvscan -v -d
>
>
>
> vgcreate -s 4kib -v vg_system1 /dev/sdb1 /dev/sdc1 /dev/sdd1
>
>
>
> vgs -v -d
>
> vgscan -v -d
>
>
>
> lvcreate --type raid5 -i 2 -I 4 -L 5G -ay -Zy -v -n lv_system1 vg_system1
>
>
>
> lvs -v -d
>
> lvscan -v -d
>
> [root at localhost ~]# . ./c_lv
>
> Model: ATA VBOX HARDDISK (scsi)
>
> Disk /dev/sda: 12.9GB
>
> Sector size (logical/physical): 512B/512B
>
> Partition Table: msdos
>
>
>
> Number Start End Size Type File system Flags
>
> 1 1049kB 525MB 524MB primary ext4 boot
>
> 2 525MB 12.9GB 12.4GB primary lvm
>
>
>
>
>
> Model: ATA VBOX HARDDISK (scsi)
>
> Disk /dev/sdb: 17.2GB
>
> Sector size (logical/physical): 512B/512B
>
> Partition Table: gpt
>
>
>
> Number Start End Size File system Name Flags
>
> 1 1049kB 17.2GB 17.2GB primary lvm
>
>
>
>
>
> Model: ATA VBOX HARDDISK (scsi)
>
> Disk /dev/sdc: 17.2GB
>
> Sector size (logical/physical): 512B/512B
>
> Partition Table: gpt
>
>
>
> Number Start End Size File system Name Flags
>
> 1 1049kB 17.2GB 17.2GB primary lvm
>
>
>
>
>
> Model: ATA VBOX HARDDISK (scsi)
>
> Disk /dev/sdd: 17.2GB
>
> Sector size (logical/physical): 512B/512B
>
> Partition Table: gpt
>
>
>
> Number Start End Size File system Name Flags
>
> 1 1049kB 17.2GB 17.2GB primary lvm
>
>
>
>
>
> Model: Linux device-mapper (linear) (dm)
>
> Disk /dev/mapper/VolGroup-lv_swap: 2147MB
>
> Sector size (logical/physical): 512B/512B
>
> Partition Table: loop
>
>
>
> Number Start End Size File system Flags
>
> 1 0.00B 2147MB 2147MB linux-swap(v1)
>
>
>
>
>
> Model: Linux device-mapper (linear) (dm)
>
> Disk /dev/mapper/VolGroup-lv_root: 10.2GB
>
> Sector size (logical/physical): 512B/512B
>
> Partition Table: loop
>
>
>
> Number Start End Size File system Flags
>
> 1 0.00B 10.2GB 10.2GB ext4
>
>
>
>
>
> Set up physical volume for "/dev/sdb1" with 33552351 available sectors
>
> Zeroing start of device /dev/sdb1
>
> Writing physical volume data to disk "/dev/sdb1"
>
> Physical volume "/dev/sdb1" successfully created
>
> Set up physical volume for "/dev/sdc1" with 33552351 available sectors
>
> Zeroing start of device /dev/sdc1
>
> Writing physical volume data to disk "/dev/sdc1"
>
> Physical volume "/dev/sdc1" successfully created
>
> Set up physical volume for "/dev/sdd1" with 33552351 available sectors
>
> Zeroing start of device /dev/sdd1
>
> Writing physical volume data to disk "/dev/sdd1"
>
> Physical volume "/dev/sdd1" successfully created
>
> Scanning for physical volume names
>
> Wiping cache of LVM-capable devices
>
> PV VG Fmt Attr PSize PFree DevSize PV
> UUID
>
> /dev/sda2 VolGroup lvm2 a-- 11.51g 0 11.51g
> JjV0kz-rW98-zTxe-X1jm-T9h3-i6SG-Frd325
>
> /dev/sdb1 lvm2 a-- 16.00g 16.00g 16.00g
> k7kxWU-W1bh-Eqda-xVj8-aBva-efN4-Et5B94
>
> /dev/sdc1 lvm2 a-- 16.00g 16.00g 16.00g
> 9M29V5-qldE-mSVl-Z2UY-h9nM-1F3i-AEsNYu
>
> /dev/sdd1 lvm2 a-- 16.00g 16.00g 16.00g
> zPpXzZ-ptwp-qxok-ubNq-asmz-Ifcj-QtRvfv
>
> Wiping cache of LVM-capable devices
>
> Wiping internal VG cache
>
> Walking through all physical volumes
>
> PV /dev/sda2 VG VolGroup lvm2 [11.51 GiB / 0 free]
>
> PV /dev/sdb1 lvm2 [16.00 GiB]
>
> PV /dev/sdc1 lvm2 [16.00 GiB]
>
> PV /dev/sdd1 lvm2 [16.00 GiB]
>
> Total: 4 [59.50 GiB] / in use: 1 [11.51 GiB] / in no VG: 3 [48.00 GiB]
>
> Wiping cache of LVM-capable devices
>
> Wiping cache of LVM-capable devices
>
> Adding physical volume '/dev/sdb1' to volume group 'vg_system1'
>
> Adding physical volume '/dev/sdc1' to volume group 'vg_system1'
>
> Adding physical volume '/dev/sdd1' to volume group 'vg_system1'
>
> Archiving volume group "vg_system1" metadata (seqno 0).
>
> Creating volume group backup "/etc/lvm/backup/vg_system1" (seqno 1).
>
> Volume group "vg_system1" successfully created
>
> Finding all volume groups
>
> Finding volume group "vg_system1"
>
> Finding volume group "VolGroup"
>
> VG Attr Ext #PV #LV #SN VSize VFree VG
> UUID VProfile
>
> VolGroup wz--n- 4.00m 1 2 0 11.51g 0
> oaJmPd-ElY9-a0E9-hUlu-SLxE-XTje-LfpO31
>
> vg_system1 wz--n- 4.00k 3 0 0 47.99g 47.99g
> hZTC0w-k0bJ-6M2A-Fmky-hHLo-hzHM-k8gdPl
>
> Wiping cache of LVM-capable devices
>
> Wiping internal VG cache
>
> Reading all physical volumes. This may take a while...
>
> Finding all volume groups
>
> Finding volume group "vg_system1"
>
> Found volume group "vg_system1" using metadata type lvm2
>
> Finding volume group "VolGroup"
>
> Found volume group "VolGroup" using metadata type lvm2
>
> Setting logging type to disk
>
> Finding volume group "vg_system1"
>
> Archiving volume group "vg_system1" metadata (seqno 1).
>
> Creating logical volume lv_system1
>
> Creating logical volume lv_system1_rimage_0
>
> Creating logical volume lv_system1_rmeta_0
>
> Creating logical volume lv_system1_rimage_1
>
> Creating logical volume lv_system1_rmeta_1
>
> Creating logical volume lv_system1_rimage_2
>
> Creating logical volume lv_system1_rmeta_2
>
> activation/volume_list configuration setting not defined: Checking
> only host tags for vg_system1/lv_system1_rmeta_0
>
> Creating vg_system1-lv_system1_rmeta_0
>
> Loading vg_system1-lv_system1_rmeta_0 table (253:2)
>
> Resuming vg_system1-lv_system1_rmeta_0 (253:2)
>
> Clearing metadata area of vg_system1/lv_system1_rmeta_0
>
> Clearing start of logical volume "lv_system1_rmeta_0"
>
> Removing vg_system1-lv_system1_rmeta_0 (253:2)
>
> activation/volume_list configuration setting not defined: Checking
> only host tags for vg_system1/lv_system1_rmeta_1
>
> Creating vg_system1-lv_system1_rmeta_1
>
> Loading vg_system1-lv_system1_rmeta_1 table (253:2)
>
> Resuming vg_system1-lv_system1_rmeta_1 (253:2)
>
> Clearing metadata area of vg_system1/lv_system1_rmeta_1
>
> Clearing start of logical volume "lv_system1_rmeta_1"
>
> Removing vg_system1-lv_system1_rmeta_1 (253:2)
>
> activation/volume_list configuration setting not defined: Checking
> only host tags for vg_system1/lv_system1_rmeta_2
>
> Creating vg_system1-lv_system1_rmeta_2
>
> Loading vg_system1-lv_system1_rmeta_2 table (253:2)
>
> Resuming vg_system1-lv_system1_rmeta_2 (253:2)
>
> Clearing metadata area of vg_system1/lv_system1_rmeta_2
>
> Clearing start of logical volume "lv_system1_rmeta_2"
>
> Removing vg_system1-lv_system1_rmeta_2 (253:2)
>
> Creating volume group backup "/etc/lvm/backup/vg_system1" (seqno 3).
>
> activation/volume_list configuration setting not defined: Checking
> only host tags for vg_system1/lv_system1
>
> Creating vg_system1-lv_system1_rmeta_0
>
> Loading vg_system1-lv_system1_rmeta_0 table (253:2)
>
> Resuming vg_system1-lv_system1_rmeta_0 (253:2)
>
> Creating vg_system1-lv_system1_rimage_0
>
> Loading vg_system1-lv_system1_rimage_0 table (253:3)
>
> Resuming vg_system1-lv_system1_rimage_0 (253:3)
>
> Creating vg_system1-lv_system1_rmeta_1
>
> Loading vg_system1-lv_system1_rmeta_1 table (253:4)
>
> Resuming vg_system1-lv_system1_rmeta_1 (253:4)
>
> Creating vg_system1-lv_system1_rimage_1
>
> Loading vg_system1-lv_system1_rimage_1 table (253:5)
>
> Resuming vg_system1-lv_system1_rimage_1 (253:5)
>
> Creating vg_system1-lv_system1_rmeta_2
>
> Loading vg_system1-lv_system1_rmeta_2 table (253:6)
>
> Resuming vg_system1-lv_system1_rmeta_2 (253:6)
>
> Creating vg_system1-lv_system1_rimage_2
>
> Loading vg_system1-lv_system1_rimage_2 table (253:7)
>
> Resuming vg_system1-lv_system1_rimage_2 (253:7)
>
> Creating vg_system1-lv_system1
>
> Loading vg_system1-lv_system1 table (253:8)
>
> Resuming vg_system1-lv_system1 (253:8)
>
> Monitoring vg_system1/lv_system1
>
> Clearing start of logical volume "lv_system1"
>
> /dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0:
> Input/output error
>
> /dev/vg_system1/lv_system1: write failed after 0 of 4096 at 0:
> Input/output error
>
> Creating volume group backup "/etc/lvm/backup/vg_system1" (seqno 3).
>
> Logical volume "lv_system1" created
>
> Finding all logical volumes
>
> LV VG #Seg Attr LSize Maj Min KMaj KMin Pool
> Origin Data% Meta% Move Cpy%Sync Log Convert LV
> UUID LProfile
>
> lv_root VolGroup 1 -wi-ao---- 9.51g -1 -1 253
> 0
> BG77rR-C2lS-UXzm-V4KI-ptcG-1BZu-0yqVcV
>
> lv_swap VolGroup 1 -wi-ao---- 2.00g -1 -1 253
> 1
> i6uo0r-9T2P-OW60-Z1i7-6o7y-id1Q-1R7Lli
>
> lv_system1 vg_system1 1 rwi-a-r-r- 5.00g -1 -1 253
> 8 100.00
> GM0hKc-BHNV-CUAR-y0US-5Iw9-Ihi0-GSFD1V
>
> Finding all logical volumes
>
> ACTIVE '/dev/vg_system1/lv_system1' [5.00 GiB] inherit
>
> ACTIVE '/dev/VolGroup/lv_root' [9.51 GiB] inherit
>
> ACTIVE '/dev/VolGroup/lv_swap' [2.00 GiB] inherit
>
> [root at localhost ~]#
>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/