[linux-lvm] Snapshots and disk re-use
Jonathan Tripathy
jonnyt at abpni.co.uk
Tue Apr 5 22:52:56 UTC 2011
On 05/04/2011 23:42, James Hawtin wrote:
> On 05/04/2011 21:36, Jonathan Tripathy wrote:
>> Hi James,
>>
>> Interesting, didn't know you could do that! However, how do I know
>> that the PEs aren't being used by LVs? Also, could you please explain
>> the syntax? Normally to create a snapshot, I would do:
>>
>> lvcreate -L20G -s -n backup /dev/vg0/customerID
>>
>
> Hmmm, well, you have two options. You could use pvdisplay --map or
> lvdisplay --map to work out exactly which PEs were used to build
> your snapshot COW, and then use that information to create
> a blanking LV over the same PEs. Or you could do it the easy way:
>
> 1 hog the space to specific PEs
> 2 delete the hog
> 3 create the snapshot on same PEs
> 4 backup
> 5 delete the snapshot
> 6 create the hog on the same PEs
> 7 zero the hog
>
> This has the advantage that the creation commands will fail if the PEs
> you want are not available. The drawback is that you probably need
> more space for snapshots, as this approach is less flexible in its use
> of space. Below I have illustrated all the commands you need. You don't
> need all the display commands, but they are there to prove that this
> has worked and that the LVs are in the same place.
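
The seven steps above can be sketched as one script. This is a non-authoritative sketch: the run() wrapper only prints each command so the sequence can be reviewed; replace its body with "$@" to execute for real (as root). The VG, origin LV, PV, and PE range are taken from the example output below; adjust them for your own layout.

```shell
#!/bin/sh
# Sketch of the hog/snapshot cycle. run() only prints the commands;
# swap its body for "$@" to actually execute them.
run() { printf '+ %s\n' "$*"; }

PV=/dev/cciss/c0d1p1   # PV from the example below
RANGE=5448-5467        # PE range from the example below

run lvcreate -l 20 -n hog_lv test_vg "$PV:$RANGE"                     # 1. hog the PEs
run lvremove -f /dev/test_vg/hog_lv                                   # 2. delete the hog
run lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv "$PV:$RANGE"  # 3. snapshot on the same PEs
run echo "backup /dev/test_vg/data_snap here"                         # 4. backup
run lvremove -f /dev/test_vg/data_snap                                # 5. delete the snapshot
run lvcreate -l 20 -n hog_lv test_vg "$PV:$RANGE"                     # 6. re-create the hog
run dd if=/dev/zero of=/dev/test_vg/hog_lv bs=1M                      # 7. zero it (dd stops at ENOSPC)
run lvremove -f /dev/test_vg/hog_lv
```
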
>
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4332
> Allocated PE 1136
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/test_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5467:
> FREE
>
> #lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
>
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4312
> Allocated PE 1156
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/test_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5447:
> FREE
> Physical extent 5448 to 5467:
> Logical volume /dev/test_vg/hog_lv
> Logical extents 0 to 19
>
> #lvremove /dev/test_vg/hog_lv
> Do you really want to remove active logical volume hog_lv? [y/n]: y
> Logical volume "hog_lv" successfully removed
> #lvcreate -l 20 -s -n data_snap /dev/test_vg/data_lv
> /dev/cciss/c0d1p1:5448-5467
> Logical volume "data_snap" created
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4312
> Allocated PE 1156
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/test_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5447:
> FREE
> Physical extent 5448 to 5467:
> Logical volume /dev/test_vg/data_snap
> Logical extents 0 to 19
>
>
> #lvdisplay /dev/test_vg/data_snap
> --- Logical volume ---
> LV Name /dev/test_vg/data_snap
> VG Name test_vg
> LV UUID bdqB77-f0vb-ZucS-Ka1l-pCr3-Ebeq-kOchmk
> LV Write Access read/write
> LV snapshot status active destination for /dev/test_vg/data_lv
> LV Status available
> # open 0
> LV Size 30.00 GB
> Current LE 240
> COW-table size 2.50 GB
> COW-table LE 20
> Allocated to snapshot 0.00%
> Snapshot chunk size 4.00 KB
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:5
>
> #lvdisplay --map /dev/test_vg/data_snap
> --- Logical volume ---
> LV Name /dev/test_vg/data_snap
> VG Name test_vg
> LV UUID bdqB77-f0vb-ZucS-Ka1l-pCr3-Ebeq-kOchmk
> LV Write Access read/write
> LV snapshot status active destination for /dev/test_vg/data_lv
> LV Status available
> # open 0
> LV Size 30.00 GB
> Current LE 240
> COW-table size 2.50 GB
> COW-table LE 20
> Allocated to snapshot 0.00%
> Snapshot chunk size 4.00 KB
> Segments 1
> Allocation inherit
> Read ahead sectors auto
> - currently set to 256
> Block device 253:5
>
> --- Segments ---
> Logical extent 0 to 19:
> Type linear
> Physical volume /dev/cciss/c0d1p1
> Physical extents 5448 to 5467
>
> <NOW BACKUP>
>
> #lvremove /dev/test_vg/data_snap
> Do you really want to remove active logical volume data_snap? [y/n]: y
> Logical volume "data_snap" successfully removed
>
> #lvcreate -l 20 -n hog_lv test_vg /dev/cciss/c0d1p1:5448-5467
> Logical volume "hog_lv" created
>
> #pvdisplay --map /dev/cciss/c0d1p1
> --- Physical volume ---
> PV Name /dev/cciss/c0d1p1
> VG Name test_vg
> PV Size 683.51 GB / not usable 5.97 MB
> Allocatable yes
> PE Size (KByte) 131072
> Total PE 5468
> Free PE 4312
> Allocated PE 1156
> PV UUID YXjplf-EfLh-8Jkr-2utT-gmi5-gjAH-UOCcN0
>
> --- Physical Segments ---
> Physical extent 0 to 15:
> Logical volume /dev/test_vg/test_lv
> Logical extents 0 to 15
> Physical extent 16 to 815:
> Logical volume /dev/test_vg/mail_lv
> Logical extents 0 to 799
> Physical extent 816 to 975:
> Logical volume /dev/test_vg/data_lv
> Logical extents 0 to 159
> Physical extent 976 to 2255:
> FREE
> Physical extent 2256 to 2335:
> Logical volume /dev/test_vg/srv_lv
> Logical extents 0 to 79
> Physical extent 2336 to 2415:
> Logical volume /dev/test_vg/data_lv
> Logical extents 160 to 239
> Physical extent 2416 to 5447:
> FREE
> Physical extent 5448 to 5467:
> Logical volume /dev/test_vg/hog_lv
> Logical extents 0 to 19
>
> #dd if=/dev/zero of=/dev/test_vg/hog_lv
>
> #lvremove /dev/test_vg/hog_lv
> Do you really want to remove active logical volume hog_lv? [y/n]: y
> Logical volume "hog_lv" successfully removed
>
> Enjoy
>
> James
>
>
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
James,
That's fantastic! Thanks very much! I have a couple of questions:
1) If I wanted to create a script that backed up lots of customer-data
LVs, could I just do one zeroing pass at the end (and still have no data
leakage)?
2) On average, my data LVs are 20GB each, so erasing a 20GB snapshot
would take about 20 minutes. If I made the snapshot only 1GB, it would
be quick to erase at the end; however, only 1GB of changes could then be
written to the respective origin before the snapshot fills, correct?
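
A hypothetical sketch of the script described in (1), assuming made-up customer LV names (cust001 etc.), a made-up backup destination under /backups, and the VG/PE range from James's example. As in his walkthrough, forcing every snapshot onto the same PE range is what would let a single zeroing pass at the end cover them all; whether that is actually leak-free is exactly what question (1) asks. The run() wrapper only prints each command; replace its body with "$@" to execute.

```shell
#!/bin/sh
# Hypothetical multi-customer backup loop: every snapshot is forced onto
# the same PE range, then that range is hogged and zeroed once at the end.
run() { printf '+ %s\n' "$*"; }

PV=/dev/cciss/c0d1p1
RANGE=5448-5467
CUSTOMERS="cust001 cust002 cust003"   # hypothetical LV names in vg0

for c in $CUSTOMERS; do
    run lvcreate -l 20 -s -n backup "/dev/vg0/$c" "$PV:$RANGE"  # snapshot on the shared PEs
    run dd if=/dev/vg0/backup of="/backups/$c.img" bs=1M        # back up the snapshot
    run lvremove -f /dev/vg0/backup                             # free the PEs for the next customer
done

# One hog + zero pass at the end, over the range every snapshot used.
run lvcreate -l 20 -n hog_lv vg0 "$PV:$RANGE"
run dd if=/dev/zero of=/dev/vg0/hog_lv bs=1M
run lvremove -f /dev/vg0/hog_lv
```
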
Thanks