[libvirt] NFS over RDMA small block DIRECT_IO bug

Avi Kivity avi at redhat.com
Wed Sep 5 14:02:31 UTC 2012


On 09/04/2012 03:04 PM, Myklebust, Trond wrote:
> On Tue, 2012-09-04 at 11:31 +0200, Andrew Holway wrote:
>> Hello.
>> 
>> # Avi Kivity avi(a)redhat recommended I copy kvm in on this. It would also seem relevant to libvirt. #
>> 
>> I have a CentOS 6.2 server and a CentOS 6.2 client.
>> 
>> [root at store ~]# cat /etc/exports 
>> /dev/shm				10.149.0.0/16(rw,fsid=1,no_root_squash,insecure)    (I have tried with non-tmpfs targets also)
>> 
>> 
>> [root at node001 ~]# cat /etc/fstab 
>> store.ibnet:/dev/shm             /mnt                 nfs          rdma,port=2050,defaults 0 0
>> 
>> 
>> I wrote a little for-loop one-liner that dd'd the CentOS net-install image to a file called 'hello' and then checksummed that file. Each iteration uses a different block size.
>> 
>> Non-DIRECT_IO seems to work fine. DIRECT_IO with 512-byte, 1K and 2K block sizes corrupts the file.
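[The one-liner itself isn't shown; a minimal reconstruction of the methodology, with local temp files standing in for the /mnt NFS mount and illustrative file sizes. Adding oflag=direct to the dd command is what would exercise the DIRECT_IO path from the report.]

```shell
# Sketch of the reported test: copy an image at several block sizes,
# then compare checksums against the source.  Paths/sizes are illustrative.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=4K count=256 2>/dev/null   # 1 MiB test image
ref=$(md5sum < "$src" | cut -d' ' -f1)

for bs in 512 1K 2K 4K; do
    # Add oflag=direct to exercise the DIRECT_IO path over the NFS mount.
    dd if="$src" of="$dst" bs="$bs" 2>/dev/null
    sum=$(md5sum < "$dst" | cut -d' ' -f1)
    if [ "$sum" = "$ref" ]; then echo "bs=$bs OK"; else echo "bs=$bs CORRUPT"; fi
done
rm -f "$src" "$dst"
```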
> 
> 
> That is expected behaviour. DIRECT_IO over RDMA needs to be page aligned
> so that it can use the more efficient RDMA READ and RDMA WRITE memory
> semantics (instead of the SEND/RECEIVE channel semantics).

Shouldn't subpage requests fail then?  O_DIRECT block requests fail for
subsector writes, instead of corrupting your data.

Hopefully this is documented somewhere.

-- 
error compiling committee.c: too many arguments to function



