[libvirt] Using Restore in another host.

Marcela Castro León mcastrol at gmail.com
Tue Apr 5 15:20:31 UTC 2011


Hello
OK, this is the new log. Now the old error appeared again:

error: Failed to restore domain from XX
error: monitor socket did not show up.: Connection refused

Regards.
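The rename Michal suggests further down the thread (dropping ':' from the save-file name) can be scripted; a minimal bash sketch, with the filename taken from the thread and no path handling assumed:

```shell
#!/usr/bin/env bash
# Sketch of the rename suggested in the thread below:
# strip ':' (and spaces) from a saved-image name before restoring.
f="sv-chubut-2011-04-04-17:38"
safe="${f//:/-}"      # replace every colon with a dash
safe="${safe// /-}"   # replace any spaces as well
echo "$safe"          # prints: sv-chubut-2011-04-04-17-38
```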

2011/4/5 Michal Novotny <minovotn at redhat.com>

> Hi Marcela,
> I was investigating the log file and it seems like the image file cannot
> be opened on the remote host.
>
> According to the log you're doing the restore on the host named
> rionegro, not on localhost. It looks like the saved guest image is
> not accessible from the rionegro system. Could you please connect
> to the rionegro system using SSH and then connect to the default
> system hypervisor using:
>
> # virsh restore <image>
>
> with no specification of remote system to connect to the default
> hypervisor (default is qemu:///system under root account).
>
> Also, the colon character (':') may be causing issues AFAIK, so
> try renaming the image from sv-chubut-2011-04-04-17:38 to some other
> name without spaces or colon characters, e.g. to
> sv-chubut-2011-04-04-17-38, and try the restore that way.
>
> Since, according to the code, it's a file-open error, I guess the
> remote system does not have access to the file.
>
> Michal
>
> On 04/05/2011 04:54 PM, Marcela Castro León wrote:
> > Hello
> > This is the log I got doing the restore. It says that it couldn't get
> > the image, but the image is OK, because I can start up the guest.
> > I can't migrate the guest either, so I suppose I have a problem in my
> > configuration.
> > Thank you very much in advance.
> > Marcela.
> >
> > 2011/4/5 Michal Novotny <minovotn at redhat.com>
> >
> >     Hi Marcela,
> >     is any other guest working fine on the host that cannot
> >     restore this VM?
> >
> >     You could also try running:
> >
> >     # LIBVIRT_DEBUG=1 virsh restore sv-chubut-2011-04-04-17:38 2> virsh-restore.log
> >
> >     This command enables libvirt debug logging and writes the debug
> >     log to the virsh-restore.log file. That file could be sent to
> >     the list for analysis of what's wrong.
> >
> >     Thanks,
> >     Michal
> >
> >     On 04/05/2011 11:57 AM, Marcela Castro León wrote:
> >     > Hello Daniel
> >     > Thank you for all your information, but I still haven't solved the
> >     > problem. I tried the option you mentioned, with two different guests
> >     > on two different hosts, but in all cases I got:
> >     >
> >     > virsh # restore sv-chubut-2011-04-04-17:38
> >     > error: Failed to restore domain from sv-chubut-2011-04-04-17:38
> >     > error: monitor socket did not show up.: Connection refused
> >     >
> >     > I cannot find any useful information (at least to me) in the
> >     > log you mention.
> >     > I'd appreciate a lot a new suggestion.
> >     > Thanks
> >     > Marcela
> >     >
> >     >
> >     >
> >     >
> >     > 2011/4/4 Daniel P. Berrange <berrange at redhat.com>
> >     >
> >     >     On Sun, Apr 03, 2011 at 10:43:45AM +0200, Marcela Castro
> >     León wrote:
> >     >     > Hello:
> >     >     > I need to know if I can use the restore operation (virsh, or
> >     >     > the equivalent in libvirt) to recover a previous state of a
> >     >     > guest that was saved previously on another host.
> >     >     > I did a test, but I got an error:
> >     >     >
> >     >     > The exact sequence I tested using virsh is:
> >     >     > On [HOST SOURCE]: Using virsh
> >     >     > 1) save [domain] [file]
> >     >     > 2) restore file
> >     >     > 3) destroy [domain]
> >     >     >
> >     >     > On [HOST SOURCE] using ubuntu sh
> >     >     > 4) cp [guest.img] [guest.xml] [file] to HOST2
> >     >     >
> >     >     > On [HOST TARGET] using virsh
> >     >     > 5) define [guest.xml] (using image on destination in HOST2)
> >     >     > 6) restore [file]
> >     >
> >     >     As a general rule you should only ever 'restore' from a
> >     >     file *once*. This is because after the first restore
> >     >     operation, the guest may have made writes to its disk.
> >     >     If you restore a second time, the guest OS will likely
> >     >     have an inconsistent view of the disk, which will cause
> >     >     filesystem corruption.
> >     >
> >     >     If you want to be able to restore from a saved image
> >     >     multiple times, you need to also take a snapshot of
> >     >     the disk image at the same time, and restore that
> >     >     snapshot when restoring the memory image.
> >     >
> >     >
> >     >     That aside, saving on one host & restoring on a
> >     >     different host is fine. So if you leave out steps
> >     >     2+3 in your example above, then your data would
> >     >     still be safe.
> >     >
> >     >     > The restore throws the following message:
> >     >     > virsh # restore sv-chubut-2011-04-01-09:58
> >     >     > error: Failed to restore domain from sv-chubut-2011-04-01-09:58
> >     >     > error: monitor socket did not show up.: Connection refused
> >     >
> >     >     There is probably some configuration difference on your 2nd
> >     >     host that prevented the VM from starting up. If you're lucky
> >     >     the file /var/log/libvirt/qemu/$NAME.log will tell you more.
> >     >
> >     >     Daniel
> >     >     --
> >     >     |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> >     >     |: http://libvirt.org -o- http://virt-manager.org :|
> >     >     |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> >     >     |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
> >     >
> >     >
> >     >
> >     > --
> >     > libvir-list mailing list
> >     > libvir-list at redhat.com
> >     > https://www.redhat.com/mailman/listinfo/libvir-list
> >
> >
> >     --
> >     Michal Novotny <minovotn at redhat.com>, RHCE
> >     Virtualization Team (xen userspace), Red Hat
> >
> >
>
>
> --
> Michal Novotny <minovotn at redhat.com>, RHCE
> Virtualization Team (xen userspace), Red Hat
>
>
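For reference, the save-on-one-host / restore-on-another sequence discussed in the thread (leaving out the extra restore/destroy on the source, per Daniel's advice) can be sketched as a dry-run script; the guest name, save path, and target login are placeholders, not the poster's actual setup:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the cross-host save/restore sequence from the thread.
# DOMAIN, SAVEFILE and TARGET are placeholders.
DOMAIN="myguest"
SAVEFILE="/var/lib/libvirt/save/${DOMAIN}.save"
TARGET="rionegro"

run() { echo "+ $*"; }   # dry run: print each command instead of executing it

run virsh save "$DOMAIN" "$SAVEFILE"                 # on the source host
run scp "$SAVEFILE" "root@${TARGET}:${SAVEFILE}"     # copy the memory image over
run ssh "root@${TARGET}" virsh restore "$SAVEFILE"   # restore locally on the target
```

Running the restore locally on the target (over SSH) matches Michal's suggestion to avoid a remote connection URI, and restoring from the file only once matches Daniel's warning about disk consistency.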
-------------- next part --------------
radic at rionegro:~/discoguest/mvdata/imagenes$ virsh
17:16:32.636: debug : virInitialize:336 : register drivers
17:16:32.636: debug : virRegisterDriver:837 : registering Test as driver 0
17:16:32.636: debug : virRegisterNetworkDriver:675 : registering Test as network driver 0
17:16:32.636: debug : virRegisterInterfaceDriver:706 : registering Test as interface driver 0
17:16:32.636: debug : virRegisterStorageDriver:737 : registering Test as storage driver 0
17:16:32.636: debug : virRegisterDeviceMonitor:768 : registering Test as device driver 0
17:16:32.636: debug : virRegisterSecretDriver:799 : registering Test as secret driver 0
17:16:32.637: debug : virRegisterDriver:837 : registering Xen as driver 1
17:16:32.637: debug : virRegisterDriver:837 : registering OPENVZ as driver 2
17:16:32.637: debug : vboxRegister:109 : VBoxCGlueInit failed, using dummy driver
17:16:32.637: debug : virRegisterDriver:837 : registering VBOX as driver 3
17:16:32.637: debug : virRegisterNetworkDriver:675 : registering VBOX as network driver 1
17:16:32.637: debug : virRegisterStorageDriver:737 : registering VBOX as storage driver 1
17:16:32.637: debug : virRegisterDriver:837 : registering remote as driver 4
17:16:32.637: debug : virRegisterNetworkDriver:675 : registering remote as network driver 2
17:16:32.637: debug : virRegisterInterfaceDriver:706 : registering remote as interface driver 1
17:16:32.637: debug : virRegisterStorageDriver:737 : registering remote as storage driver 2
17:16:32.637: debug : virRegisterDeviceMonitor:768 : registering remote as device driver 1
17:16:32.637: debug : virRegisterSecretDriver:799 : registering remote as secret driver 1
17:16:32.637: debug : virConnectOpenAuth:1337 : name=qemu:///system, auth=0x7ffcdc643b80, flags=0
17:16:32.637: debug : do_open:1106 : name "qemu:///system" to URI components:
  scheme qemu
  opaque (null)
  authority (null)
  server (null)
  user (null)
  port 0
  path /system

17:16:32.637: debug : do_open:1116 : trying driver 0 (Test) ...
17:16:32.637: debug : do_open:1122 : driver 0 Test returned DECLINED
17:16:32.637: debug : do_open:1116 : trying driver 1 (Xen) ...
17:16:32.637: debug : do_open:1122 : driver 1 Xen returned DECLINED
17:16:32.637: debug : do_open:1116 : trying driver 2 (OPENVZ) ...
17:16:32.637: debug : do_open:1122 : driver 2 OPENVZ returned DECLINED
17:16:32.637: debug : do_open:1116 : trying driver 3 (VBOX) ...
17:16:32.637: debug : do_open:1122 : driver 3 VBOX returned DECLINED
17:16:32.637: debug : do_open:1116 : trying driver 4 (remote) ...
17:16:32.637: debug : doRemoteOpen:564 : proceeding with name = qemu:///system
17:16:32.637: debug : remoteIO:8455 : Do proc=66 serial=0 length=28 wait=(nil)
17:16:32.637: debug : remoteIO:8517 : We have the buck 66 0x7ffcdc896010 0x7ffcdc896010
17:16:32.638: debug : remoteIODecodeMessageLength:7939 : Got length, now need 64 total (60 more)
17:16:32.638: debug : remoteIOEventLoop:8381 : Giving up the buck 66 0x7ffcdc896010 (nil)
17:16:32.638: debug : remoteIO:8548 : All done with our call 66 (nil) 0x7ffcdc896010
17:16:32.638: debug : remoteIO:8455 : Do proc=1 serial=1 length=56 wait=(nil)
17:16:32.638: debug : remoteIO:8517 : We have the buck 1 0x10798e0 0x10798e0
17:16:32.638: debug : remoteIODecodeMessageLength:7939 : Got length, now need 56 total (52 more)
17:16:32.638: debug : remoteIOEventLoop:8381 : Giving up the buck 1 0x10798e0 (nil)
17:16:32.638: debug : remoteIO:8548 : All done with our call 1 (nil) 0x10798e0
17:16:32.638: debug : doRemoteOpen:917 : Adding Handler for remote events
17:16:32.638: debug : doRemoteOpen:924 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
17:16:32.638: debug : do_open:1122 : driver 4 remote returned SUCCESS
17:16:32.638: debug : do_open:1142 : network driver 0 Test returned DECLINED
17:16:32.638: debug : do_open:1142 : network driver 1 VBOX returned DECLINED
17:16:32.638: debug : do_open:1142 : network driver 2 remote returned SUCCESS
17:16:32.638: debug : do_open:1161 : interface driver 0 Test returned DECLINED
17:16:32.638: debug : do_open:1161 : interface driver 1 remote returned SUCCESS
17:16:32.638: debug : do_open:1181 : storage driver 0 Test returned DECLINED
17:16:32.638: debug : do_open:1181 : storage driver 1 VBOX returned DECLINED
17:16:32.638: debug : do_open:1181 : storage driver 2 remote returned SUCCESS
17:16:32.638: debug : do_open:1201 : node driver 0 Test returned DECLINED
17:16:32.638: debug : do_open:1201 : node driver 1 remote returned SUCCESS
17:16:32.638: debug : do_open:1228 : secret driver 0 Test returned DECLINED
17:16:32.638: debug : do_open:1228 : secret driver 1 remote returned SUCCESS
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # restore XX
17:16:36.271: debug : virDomainRestore:2283 : conn=0x1074070, from=XX
17:16:36.271: debug : remoteIO:8455 : Do proc=54 serial=2 length=76 wait=(nil)
17:16:36.271: debug : remoteIO:8517 : We have the buck 54 0x1094fc0 0x1094fc0
17:17:06.724: debug : remoteIODecodeMessageLength:7939 : Got length, now need 192 total (188 more)
17:17:06.724: debug : remoteIOEventLoop:8381 : Giving up the buck 54 0x1094fc0 (nil)
17:17:06.724: debug : remoteIO:8548 : All done with our call 54 (nil) 0x1094fc0
error: Failed to restore domain from XX
error: monitor socket did not show up.: Connection refused

virsh # 
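One quick check that would confirm or rule out the file-access theory from the thread is to verify, on the target host itself, that the save file is readable before running restore; a minimal sketch (the mktemp file here is only a stand-in for the real save image):

```shell
#!/usr/bin/env bash
# Sketch: check a saved guest image is readable before 'virsh restore'.
# mktemp creates a stand-in file; substitute the real save-file path.
IMG="$(mktemp)"
if [ -r "$IMG" ]; then
    echo "readable: yes"
else
    echo "readable: no"
fi
rm -f "$IMG"
```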


