[Cluster-devel] [PATCH] fs: Do not check for valid page->mapping in page_cache_pipe_buf_confirm

Abhijith Das adas at redhat.com
Tue Jun 28 14:23:50 UTC 2016


----- Original Message -----
> From: "Abhi Das" <adas at redhat.com>
> To: cluster-devel at redhat.com, linux-kernel at vger.kernel.org, linux-fsdevel at vger.kernel.org
> Cc: "Abhi Das" <adas at redhat.com>, "Miklos Szeredi" <mszeredi at redhat.com>, "Jens Axboe" <axboe at fb.com>, "Al Viro"
> <viro at zeniv.linux.org.uk>
> Sent: Wednesday, May 25, 2016 10:24:45 PM
> Subject: [PATCH] fs: Do not check for valid page->mapping in page_cache_pipe_buf_confirm
> 
> If the page is truncated after being spliced into the pipe, its
> data is not necessarily invalid.
> 
> For filesystems that invalidate pages, we used to return -ENODATA
> even though the data is still there; it's just possibly different from
> what was spliced into the pipe. We shouldn't have to throw away the
> buffer or return an error in this case.
> 
> Signed-off-by: Abhi Das <adas at redhat.com>
> CC: Miklos Szeredi <mszeredi at redhat.com>
> CC: Jens Axboe <axboe at fb.com>
> CC: Al Viro <viro at zeniv.linux.org.uk>
> ---
>  fs/splice.c | 9 ---------
>  1 file changed, 9 deletions(-)
> 
> diff --git a/fs/splice.c b/fs/splice.c
> index dd9bf7e..b9899b99 100644
> --- a/fs/splice.c
> +++ b/fs/splice.c
> @@ -106,15 +106,6 @@ static int page_cache_pipe_buf_confirm(struct pipe_inode_info *pipe,
>  		lock_page(page);
>  
>  		/*
> -		 * Page got truncated/unhashed. This will cause a 0-byte
> -		 * splice, if this is the first page.
> -		 */
> -		if (!page->mapping) {
> -			err = -ENODATA;
> -			goto error;
> -		}
> -
> -		/*
>  		 * Uh oh, read-error from disk.
>  		 */
>  		if (!PageUptodate(page)) {
> --
> 2.4.3
> 
> 
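
For reference, with that check gone, page_cache_pipe_buf_confirm() ends up
looking roughly like this. This is only a sketch reconstructed from the diff
context above and the surrounding fs/splice.c code of that era, not a verbatim
copy of the resulting function:

static int page_cache_pipe_buf_confirm(struct pipe_inode_info *pipe,
				       struct pipe_buffer *buf)
{
	struct page *page = buf->page;
	int err;

	if (!PageUptodate(page)) {
		lock_page(page);

		/*
		 * Uh oh, read-error from disk.
		 */
		if (!PageUptodate(page)) {
			err = -EIO;
			goto error;
		}

		/*
		 * Page is ok after all, we are done.
		 */
		unlock_page(page);
	}

	return 0;
error:
	unlock_page(page);
	return err;
}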

Hi Jens/Al,

Any comments on this patch? FWIW, I've done some testing on GFS2 with this patch
and it seems to be holding up ok. The test verifies writes using splice reads,
and even when the pages were truncated, the data read from the spliced pipe was
consistent with what was expected.
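
Roughly, the test does something along these lines (a minimal userspace sketch,
not the actual harness; the mount point, file name and sizes are made up, and
whether the old -ENODATA path actually triggers depends on the filesystem
invalidating the page):

/* Minimal sketch: splice page-cache data into a pipe, truncate the file,
 * then read from the pipe and check the data still matches what was
 * written.  Error handling is abbreviated. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/gfs2/splice-test";	/* made-up path */
	char wbuf[4096], rbuf[4096];
	int fd, pfd[2];
	loff_t off = 0;
	ssize_t n;

	memset(wbuf, 'A', sizeof(wbuf));

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0 || write(fd, wbuf, sizeof(wbuf)) != sizeof(wbuf) ||
	    pipe(pfd) < 0) {
		perror("setup");
		return 1;
	}

	/* Splice one page of file data into the pipe; the page is
	 * referenced, not copied, at this point. */
	n = splice(fd, &off, pfd[1], NULL, sizeof(wbuf), 0);
	if (n != sizeof(wbuf)) {
		perror("splice");
		return 1;
	}

	/* Truncate after the splice.  Before the patch, consuming the
	 * pipe could fail with -ENODATA once the page lost its mapping. */
	if (ftruncate(fd, 0) < 0) {
		perror("ftruncate");
		return 1;
	}

	n = read(pfd[0], rbuf, sizeof(rbuf));
	if (n != sizeof(rbuf) || memcmp(wbuf, rbuf, n) != 0)
		fprintf(stderr, "pipe data mismatch (read %zd bytes)\n", n);
	else
		printf("pipe data consistent after truncate\n");
	return 0;
}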

Cheers!
--Abhi



