[linux-lvm] Snapshot behavior on classic LVM vs ThinLVM

Gionatan Danti g.danti at assyoma.it
Wed Apr 26 13:37:37 UTC 2017


On 26/04/2017 13:23, Zdenek Kabelac wrote:
>
> You need to use 'direct' write mode - otherwise you are just witnessing
> issues related to 'page-cache' flushing.
>
> Every update of a file means an update of the journal - so you can
> certainly lose some in-flight data - but any well-behaved software
> needs to flush before starting the next transaction - so with a
> correctly working transactional application no data should be lost.

I used "oflag=sync" for this very reason - to avoid async writes.
However, let's retry with "oflag=direct,sync".
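
(As an aside, one can verify which open flags dd actually passes - a
quick sketch, assuming strace is available; the probe file name is just
an example:

  # recent dd uses openat(); older builds may show plain open()
  strace -e trace=open,openat dd if=/dev/zero of=/mnt/storage/probe \
  bs=1M count=1 oflag=direct,sync 2>&1 | grep probe

With oflag=direct,sync the reported flags should include O_DIRECT|O_SYNC.)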

This is the thinpool before filling:

[root at blackhole mnt]# lvs
   LV       VG        Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
   thinpool vg_kvm    twi-aot---  1.00g                 87.66  12.01
   thinvol  vg_kvm    Vwi-aot---  2.00g thinpool        43.83
   root     vg_system -wi-ao---- 50.00g
   swap     vg_system -wi-ao----  7.62g

[root at blackhole storage]# mount | grep thinvol
/dev/mapper/vg_kvm-thinvol on /mnt/storage type ext4 (rw,relatime,seclabel,errors=remount-ro,stripe=32,data=ordered)
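
(For context, the pool is set to fail writes immediately rather than
queue them when data space runs out. A hedged sketch of how that can be
enabled on an existing pool, using the names from the lvs output above:

  # return I/O errors as soon as the pool data space is exhausted
  lvchange --errorwhenfull y vg_kvm/thinpool)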


Fill the thin volume (note that errors are raised immediately due to 
--errorwhenfull=y):

[root at blackhole mnt]# dd if=/dev/zero of=/mnt/storage/test.2 bs=1M count=300 oflag=direct,sync
dd: error writing ‘/mnt/storage/test.2’: Input/output error
127+0 records in
126+0 records out
132120576 bytes (132 MB) copied, 14.2165 s, 9.3 MB/s

From syslog:

Apr 26 15:26:24 localhost lvm[897]: WARNING: Thin pool vg_kvm-thinpool-tpool data is now 96.84% full.
Apr 26 15:26:27 localhost kernel: device-mapper: thin: 253:4: reached low water mark for data device: sending event.
Apr 26 15:26:27 localhost kernel: device-mapper: thin: 253:4: switching pool to out-of-data-space (error IO) mode
Apr 26 15:26:34 localhost lvm[897]: WARNING: Thin pool vg_kvm-thinpool-tpool data is now 100.00% full.
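
(The pool mode can also be read straight from device-mapper - a small
sketch, assuming the -tpool device name shown in the syslog lines:

  # the thin-pool status line reports rw / out_of_data_space / ro
  dmsetup status vg_kvm-thinpool-tpool)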

Despite write errors, the filesystem is not in read-only mode:

[root at blackhole mnt]# touch /mnt/storage/test.txt; sync; ls -al /mnt/storage
total 948248
drwxr-xr-x. 3 root root      4096 26 apr 15.27 .
drwxr-xr-x. 6 root root        51 20 apr 15.23 ..
drwx------. 2 root root     16384 26 apr 15.24 lost+found
-rw-r--r--. 1 root root 838860800 26 apr 15.25 test.1
-rw-r--r--. 1 root root 132120576 26 apr 15.26 test.2
-rw-r--r--. 1 root root         0 26 apr 15.27 test.txt
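
(One more way to confirm that ext4 did not flip to read-only - a minimal
sketch querying the live mount options:

  # "ro" would appear here if errors=remount-ro had been triggered
  findmnt -no OPTIONS /mnt/storage)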

I can even recover free space via fstrim:

[root at blackhole mnt]# rm /mnt/storage/test.1; sync
rm: remove regular file ‘/mnt/storage/test.1’? y
[root at blackhole mnt]# fstrim -v /mnt/storage/
/mnt/storage/: 828 MiB (868204544 bytes) trimmed
[root at blackhole mnt]# lvs
   LV       VG        Attr       LSize  Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
   thinpool vg_kvm    twi-aot---  1.00g                 21.83  3.71
   thinvol  vg_kvm    Vwi-aot---  2.00g thinpool        10.92
   root     vg_system -wi-ao---- 50.00g
   swap     vg_system -wi-ao----  7.62g

From syslog:
Apr 26 15:34:15 localhost kernel: device-mapper: thin: 253:4: switching 
pool to write mode
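
(For fstrim to hand space back like this, discards must be passed down
through the thin pool - a hedged sketch of how to verify, using the
pool name from above:

  # should report "passdown", which is the default for thin pools
  lvs -o name,discards vg_kvm/thinpool)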

To me, it seems that the metadata updates completed because they hit
already-allocated disk space, and so the remount-ro code was never
triggered. Am I missing something?
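
(One way to probe that theory - a rough sketch, assuming e2fsprogs is
installed: ext4 only remounts read-only when it detects an internal
inconsistency, which should be visible in the superblock:

  # "Errors behavior" shows the policy; error count fields only
  # appear if ext4 actually recorded corruption
  tune2fs -l /dev/mapper/vg_kvm-thinvol | grep -i error)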

Regards.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8



