[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

[libvirt] [PATCH v2] iohelper: fsync() at the end

Currently, when doing a (managed) save, we insert the iohelper
between qemu and the OS. A pipe is created, the writing end is
passed to qemu and the reading end to the iohelper, which reads
the data and writes it into the given file. However, since
write() is asynchronous, the data may still sit in OS caches,
so in some corner cases all migration data may have been read
and written (though not physically), and both qemu and the
iohelper report success. Yet on some non-local filesystems,
where ENOSPC is polled every X time units, we can end up in a
situation where every operation succeeded but the data never
reached the disk, and never will. Therefore we ought to sync
the caches to make sure the data has reached the block device
on the remote host.

For more information follow:

 src/util/iohelper.c |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/src/util/iohelper.c b/src/util/iohelper.c
index c6542ed..0d356c2 100644
--- a/src/util/iohelper.c
+++ b/src/util/iohelper.c
@@ -40,6 +40,7 @@
 #include "virterror_internal.h"
 #include "configmake.h"
 #include "virrandom.h"
+#include "storage_file.h"
@@ -179,6 +180,14 @@ runIO(const char *path, int fd, int oflags, unsigned long long length)
+    /* If we are on a shared FS, ensure all data is written, as some
+     * FSs may do writeback caching or polling for ENOSPC or other
+     * magic that a local FS does not. */
+    if (virStorageFileIsSharedFS(fdoutname) && (fdatasync(fdout) < 0)) {
+        virReportSystemError(errno, _("unable to fsync %s"), fdoutname);
+        goto cleanup;
+    }
     ret = 0;
