[libvirt] [Gluster-devel] [RFC PATCH v1 0/2] Qemu/Gluster support in Libvirt

Yin Yin maillistofyinyin at gmail.com
Thu Aug 30 02:57:11 UTC 2012


Hi, Harsh:
  I set some breakpoints in glusterd, and I can now gdb the qemu-kvm process
forked from libvirtd.

Breakpoints in glusterd:

(gdb) i b
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x00007f903ef1a0a0 in server_getspec at
glusterd-handshake.c:122
2       breakpoint     keep y   0x00000034f4607070 in rpcsvc_program_actor
at rpcsvc.c:137
breakpoint already hit 2 times
3       breakpoint     keep y   0x00007f903ef199f0 in
glusterd_set_clnt_mgmt_program at glusterd-handshake.c:359
4       breakpoint     keep y   0x00007f903ef1a0a0 in server_getspec at
glusterd-handshake.c:122

In the rpcsvc_handle_rpc_call function, rpcsvc_program_actor is called and
returns the right actor:
(gdb) p *actor
$13 = {procname = "GETSPEC", '\000' <repeats 24 times>, procnum = 2, actor
= 0x7f903ef1a0a0 <server_getspec>, vector_sizer = 0, unprivileged =
_gf_false}

but the request then fails this check:

        if (0 == svc->allow_insecure && unprivileged && !actor->unprivileged) {
                /* Non-privileged user, fail request */
                gf_log ("glusterd", GF_LOG_ERROR,
                        "Request received from non-"
                        "privileged port. Failing request");
                rpcsvc_request_destroy (req);
                return -1;
        }
so server_getspec is never called on the server side (svc->allow_insecure is
0 by default, the request presumably arrives from a non-privileged port since
qemu-kvm runs as a non-root user, and the GETSPEC actor has unprivileged =
_gf_false), which causes the qemu-kvm process to fail.

My question:
1. In the condition (0 == svc->allow_insecure && unprivileged && !actor->unprivileged),
which part of the check is going wrong here?
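
For reference, this looks like glusterd's default behaviour of rejecting
management requests that arrive from non-privileged ports. If I read the
options correctly (the option names below are my assumption, not something
confirmed in this thread), relaxing that check would look roughly like:

    # in /etc/glusterfs/glusterd.vol, then restart glusterd
    option rpc-auth-allow-insecure on

    # per volume, so the bricks also accept clients on non-privileged ports
    gluster volume set dht server.allow-insecure on

Here "dht" is just the volume name taken from the gluster:// URL in the qemu
command quoted below.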

Best Regards,
Yin Yin

On Thu, Aug 30, 2012 at 9:14 AM, Yin Yin <maillistofyinyin at gmail.com> wrote:

> Hi, Harsh:
>       I've tried your patch, but I can't boot the VM.
> [root@yinyin qemu-glusterfs]# virsh create gluster-libvirt.xml
> error: Failed to create domain from gluster-libvirt.xml
> error: Unable to read from monitor: Connection reset by peer
>
> libvirt builds the qemu/gluster command line correctly and qemu-kvm starts,
> but it fails after a short while, which causes the libvirt monitor
> connection to fail.
>
> The log at /var/libvirt/qemu/gluster-vm.log follows:
>
> 2012-08-30 01:03:08.418+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice
> /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp
> 1,sockets=1,cores=1,threads=1 -name gluster-vm -uuid
> f65bd812-45fb-cc2d-75fd-84206248e026  -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/gluster-vm.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://
> 10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -device usb-tablet,id=input0 -spice
> port=30038,addr=0.0.0.0,disable-ticketing -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> 2012-08-30 01:03:08.423+0000: 4452: debug : virCommandHook:2041 : Run hook
> 0x48f160 0x7f433ba0e570
> 2012-08-30 01:03:08.423+0000: 4452: debug : qemuProcessHook:2475 :
> Obtaining domain lock
> 2012-08-30 01:03:08.423+0000: 4452: debug : virDomainLockManagerNew:123 :
> plugin=0x7f43300b7980 dom=0x7f43240022b0 withResources=1
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerNew:291 :
> plugin=0x7f43300b7980 type=0 nparams=4 params=0x7f433ba0d9d0 flags=0
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:98 :
> key=uuid type=uuid value=f65bd812-45fb-cc2d-75fd-84206248e026
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:94 :
> key=name type=string value=gluster-vm
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:82 :
> key=id type=uint value=1
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerLogParams:82 :
> key=pid type=uint value=4452
> 2012-08-30 01:03:08.423+0000: 4452: debug : virDomainLockManagerNew:135 :
> Adding leases
> 2012-08-30 01:03:08.423+0000: 4452: debug : virDomainLockManagerNew:140 :
> Adding disks
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerAcquire:337 :
> lock=0x7f4324001ba0 state='(null)' flags=3 fd=0x7f433ba0db3c
> 2012-08-30 01:03:08.423+0000: 4452: debug : virLockManagerFree:374 :
> lock=0x7f4324001ba0
> 2012-08-30 01:03:08.423+0000: 4452: debug : qemuProcessHook:2500 : Moving
> process to cgroup
> 2012-08-30 01:03:08.423+0000: 4452: debug : virCgroupNew:603 : New group
> /libvirt/qemu/gluster-vm
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 0:cpu at /cgroup/cpu in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 1:cpuacct at /cgroup/cpuacct in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 2:cpuset at /cgroup/cpuset in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 3:memory at /cgroup/memory in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 4:devices at /cgroup/devices in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 5:freezer at /cgroup/freezer in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupDetect:262 : Detected
> mount/mapping 6:blkio at /cgroup/blkio in
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:524 : Make
> group /libvirt/qemu/gluster-vm
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/cpu/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/cpuacct/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/cpuset/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/memory/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/devices/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/freezer/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupMakeGroup:546 : Make
> controller /cgroup/blkio/libvirt/qemu/gluster-vm/
> 2012-08-30 01:03:08.424+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/cpu/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.426+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/cpuacct/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.429+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/cpuset/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.432+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/memory/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.435+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/devices/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.437+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/freezer/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.439+0000: 4452: debug : virCgroupSetValueStr:320 : Set
> value '/cgroup/blkio/libvirt/qemu/gluster-vm/tasks' to '4452'
> 2012-08-30 01:03:08.442+0000: 4452: debug :
> qemuProcessInitCpuAffinity:1731 : Setting CPU affinity
> 2012-08-30 01:03:08.443+0000: 4452: debug :
> qemuProcessInitCpuAffinity:1760 : Set CPU affinity with specified cpuset
> 2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessHook:2512 : Setting
> up security labelling
> 2012-08-30 01:03:08.443+0000: 4452: debug :
> virSecurityDACSetProcessLabel:637 : Dropping privileges of DEF to 107:107
> 2012-08-30 01:03:08.443+0000: 4452: debug : qemuProcessHook:2519 : Hook
> complete ret=0
> 2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2043 : Done
> hook 0
> 2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2056 :
> Notifying parent for handshake start on 24
> 2012-08-30 01:03:08.443+0000: 4452: debug : virCommandHook:2077 : Waiting
> on parent for handshake complete on 25
> 2012-08-30 01:03:08.495+0000: 4452: debug : virCommandHook:2093 : Hook is
> done 0
> Gluster connection failed for server=10.1.81.111 port=24007 volume=dht
> image=windows7-32-DoubCards-iotest-qcow2.img transport=socket
> qemu-kvm: -drive file=gluster://
> 10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
> could not open disk image gluster://
> 10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img: No data
> available
> 2012-08-30 01:03:11.565+0000: shutting down
>
> I can boot the vm with the command:
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice
> /usr/libexec/qemu-kvm -S -M rhel6.2.0 -enable-kvm -m 512 -smp
> 1,sockets=1,cores=1,threads=1 -name gluster-vm -uuid
> f65bd812-45fb-cc2d-75fd-84206248e026  -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/gluster-vm.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://
> 10.1.81.111:24007/dht/windows7-32-DoubCards-iotest-qcow2.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -device usb-tablet,id=input0 -spice
> port=30038,addr=0.0.0.0,disable-ticketing -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
> My questions:
> 1. What is the libvirt hook function (virCommandHook/qemuProcessHook in the
> log above)? Could it affect the qemu-kvm command?
> 2. It's hard to debug the qemu-kvm process started by libvirt. I currently
> hang glusterd for a moment so I have time to gdb the qemu-kvm, but do you
> have a better method?
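>
> (One approach I'm considering, just a sketch I haven't verified with
> libvirt: point the domain's <emulator> at a small wrapper script that runs
> qemu-kvm under gdbserver, then attach from another shell. The wrapper path
> below is hypothetical, and permissions/SELinux may get in the way.)
>
> #!/bin/sh
> # hypothetical wrapper, e.g. /usr/local/bin/qemu-kvm-gdb, referenced from
> # <emulator> in the domain XML; qemu-kvm then waits until we attach with:
> #   gdb /usr/libexec/qemu-kvm
> #   (gdb) target remote localhost:1234
> exec gdbserver :1234 /usr/libexec/qemu-kvm "$@"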
>
> Best Regards,
> Yin Yin
>
> On Fri, Aug 24, 2012 at 5:44 PM, Deepak C Shetty <
> deepakcs at linux.vnet.ibm.com> wrote:
>
>> On 08/24/2012 12:22 PM, Harsh Bora wrote:
>>
>>> On 08/24/2012 12:05 PM, Daniel Veillard wrote:
>>>
>>>> On Thu, Aug 23, 2012 at 04:31:50PM +0530, Harsh Prateek Bora wrote:
>>>>
>>>>> This patchset provides support for Gluster protocol based network
>>>>> disks.
>>>>> It is based on the proposed gluster support in Qemu on qemu-devel:
> >>>>> http://lists.gnu.org/archive/html/qemu-devel/2012-08/msg01539.html
>>>>>
>>>>
>>>>    Just to be clear, that qemu feature didn't make the deadline for 1.2,
>>>> right ? I don't think we can add support at the libvirt level until
>>>> the patches are commited in QEmu, but that doesn't prevent reviewing
>>>> them in advance . Right now we are in freeze for 0.10.0,
>>>>
>>>
>>>
> >> I am working on enabling oVirt/VDSM to exploit this, using Harsh's RFC
> >> patches.
> >> VDSM patch @ http://gerrit.ovirt.org/#/c/6856/
> >>
> >> Early feedback would help me, especially on the XML spec posted here,
> >> since my VDSM patch depends on it.
>>
>> thanx,
>> deepak
>>
>>
>>
>>
>
>