[Cluster-devel] cluster/gfs-kernel/src/gfs ops_inode.c

wcheng at sourceware.org wcheng at sourceware.org
Sun Oct 15 07:25:14 UTC 2006


CVSROOT:	/cvs/cluster
Module name:	cluster
Changes by:	wcheng at sourceware.org	2006-10-15 07:25:14

Modified files:
	gfs-kernel/src/gfs: ops_inode.c 

Log message:
	Just found that the 2.6.18 kernel has something called down_read_non_owner
	for rw_semaphore. If we can implement a similar function that does
	something like "up_write_if_owner", then we can put i_alloc_sem back
	into the correct state. Correct the comment and note this possibility.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/gfs-kernel/src/gfs/ops_inode.c.diff?cvsroot=cluster&r1=1.13&r2=1.14

--- cluster/gfs-kernel/src/gfs/ops_inode.c	2006/10/15 06:32:06	1.13
+++ cluster/gfs-kernel/src/gfs/ops_inode.c	2006/10/15 07:25:09	1.14
@@ -1349,14 +1349,13 @@
 	 * To avoid this to happen, i_alloc_sem must be dropped and trust
 	 * be put into glock that it can carry the same protection. 
 	 *
-	 * One issue with dropping i_alloc_sem is gfs_setattr() can be 
-	 * called from other code path without this sempaphore. Since linux
-	 * semaphore implementation doesn't include owner id, we have no way 
-	 * to reliably decide whether the following "up" is a correct reset. 
-	 * This implies if i_alloc_sem is ever used by non-direct_IO code 
-	 * path in the future, this hack will fall apart. In short, with this 
-	 * change, i_alloc_sem has become a meaningless lock within GFS and 
-	 * don't expect its counter representing any correct state. 
+	 * One issue with dropping i_alloc_sem is that gfs_setattr() can 
+	 * be invoked from other code paths without this semaphore held. 
+	 * We'll need a new rwsem function that can "up" the semaphore 
+	 * only when it is needed. Until that happens (we will research 
+	 * the possibility), i_alloc_sem is a meaningless lock within 
+	 * GFS. If it is ever used by other non-direct-IO code, this
+	 * hack will fall apart.
 	 *
 	 * wcheng at redhat.com 10/14/06  
 	 */ 
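
For reference, a minimal sketch of the kind of helper the log message hints
at: record which task took the write side of i_alloc_sem, so that a later
release can be made conditional on ownership, by analogy with the existing
down_read_non_owner(). Nothing below exists in GFS or the stock kernel;
gfs_alloc_sem_ctx, gfs_down_write_owner() and gfs_up_write_if_owner() are
hypothetical names used only to illustrate the idea.

	/*
	 * Sketch only -- the stock rw_semaphore keeps no owner that
	 * callers can query, so ownership is tracked on the side here.
	 */
	#include <linux/rwsem.h>
	#include <linux/sched.h>

	struct gfs_alloc_sem_ctx {
		struct rw_semaphore *sem;	/* e.g. &inode->i_alloc_sem */
		struct task_struct *owner;	/* task holding the write side */
	};

	static void gfs_down_write_owner(struct gfs_alloc_sem_ctx *ctx)
	{
		down_write(ctx->sem);
		ctx->owner = current;		/* remember who took it */
	}

	/*
	 * Release the write side only if the calling task actually holds
	 * it.  A path that reaches gfs_setattr() without the semaphore
	 * (the case the comment above worries about) becomes a no-op.
	 */
	static void gfs_up_write_if_owner(struct gfs_alloc_sem_ctx *ctx)
	{
		if (ctx->owner == current) {
			ctx->owner = NULL;
			up_write(ctx->sem);
		}
	}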



