[dm-devel] device mapper and the BLKFLSBUF ioctl

Mike Snitzer snitzer at redhat.com
Fri Oct 21 20:00:22 UTC 2016


On Fri, Oct 21 2016 at  2:33pm -0400,
Mikulas Patocka <mpatocka at redhat.com> wrote:

> Hi
> 
> I found a bug in dm regarding the BLKFLSBUF ioctl.
> 
> The BLKFLSBUF ioctl can be called on a block device and it flushes the 
> buffer cache. There is one exception - when it is called on ramdisk, it 
> actually destroys all ramdisk data (it works like a discard on the full 
> device).
> 
> The device mapper passes this ioctl down to the underlying device, so when 
> the ioctl is called on a logical volume, it can be used to destroy the 
> underlying volume group if the volume group is on ramdisk.
> 
> For example:
> # modprobe brd rd_size=1048576
> # pvcreate /dev/ram0
> # vgcreate ram_vg /dev/ram0
> # lvcreate -L 16M -n ram_lv ram_vg
> # blockdev --flushbufs /dev/ram_vg/ram_lv
> 	--- and now the whole volume group is gone; all data on the 
> 		ramdisk was replaced with zeroes
> 
> The BLKFLSBUF ioctl is only allowed with CAP_SYS_ADMIN, so there shouldn't 
> be security implications with this.
> 
> What to do about it? The best option would be to drop this special ramdisk 
> behavior and make the BLKFLSBUF ioctl flush the buffer cache on ramdisk, 
> as on all other block devices. But there may be many users with 
> scripts that depend on this special behavior.
> 
> Another possibility is to stop the device mapper from passing the 
> BLKFLSBUF ioctl down.

If anything, DM is being consistent with what the underlying device is
meant to do.

brd_ioctl() destroys the data in response to BLKFLSBUF. I'm missing why
this is a DM-specific problem.
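For context, the brd handler looks roughly like this (a paraphrased
sketch of the BLKFLSBUF case in drivers/block/brd.c, not verbatim
kernel source):

```c
static int brd_ioctl(struct block_device *bdev, fmode_t mode,
		     unsigned int cmd, unsigned long arg)
{
	struct brd_device *brd = bdev->bd_disk->private_data;

	if (cmd != BLKFLSBUF)
		return -ENOTTY;

	/* Special ramdisk semantics: rather than merely flushing the
	 * buffer cache, release and destroy all ramdisk data. */
	mutex_lock(&bdev->bd_mutex);
	kill_bdev(bdev);
	brd_free_pages(brd);
	mutex_unlock(&bdev->bd_mutex);
	return 0;
}
```

So the discard-like behavior is brd's own documented quirk, and DM
passing the ioctl through just exposes it one layer up.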
