[Cluster-devel] Cluster Project branch, RHEL46, updated. cman-kernel_2_6_9_54-6-g0413a20
cfeist at sourceware.org
Wed Apr 9 19:30:08 UTC 2008
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "Cluster Project".
http://sources.redhat.com/git/gitweb.cgi?p=cluster.git;a=commitdiff;h=0413a207c89b6b4d440b4f8bc27545ec4ee76528
The branch, RHEL46 has been updated
via 0413a207c89b6b4d440b4f8bc27545ec4ee76528 (commit)
from c22322ad43d192cdc2de77c6ceca782f5e35412b (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit 0413a207c89b6b4d440b4f8bc27545ec4ee76528
Author: Benjamin Marzinski <bmarzins at redhat.com>
Date: Tue Jan 29 22:21:45 2008 +0000
Fix for bz #419391. gfs_glock_dq was traversing the gl_holders list without
holding the gl_spin spinlock; this caused a problem when the list item it
was currently examining was removed from the list. The solution is to not
traverse the list at all, because the traversal is unnecessary. Unfortunately,
there is also a bug in this section of code: you cannot guarantee that a
glock held with GL_NOCACHE will not be cached. Fixing that issue requires
significantly more work.
4.6.z bz#441747
-----------------------------------------------------------------------
Summary of changes:
gfs-kernel/src/gfs/glock.c | 15 ++++++---------
1 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/gfs-kernel/src/gfs/glock.c b/gfs-kernel/src/gfs/glock.c
index 570ee8f..1014ea2 100644
--- a/gfs-kernel/src/gfs/glock.c
+++ b/gfs-kernel/src/gfs/glock.c
@@ -1608,8 +1608,6 @@ gfs_glock_dq(struct gfs_holder *gh)
struct gfs_sbd *sdp = gl->gl_sbd;
struct gfs_glock_operations *glops = gl->gl_ops;
struct list_head *pos;
- struct gfs_holder *tmp_gh = NULL;
- int count = 0;
atomic_inc(&gl->gl_sbd->sd_glock_dq_calls);
@@ -1620,14 +1618,13 @@ gfs_glock_dq(struct gfs_holder *gh)
set_bit(GLF_SYNC, &gl->gl_flags);
/* Don't cache glock; request demote to unlock at inter-node scope */
- if (gh->gh_flags & GL_NOCACHE) {
- list_for_each(pos, &gl->gl_holders) {
- tmp_gh = list_entry(pos, struct gfs_holder, gh_list);
- ++count;
- }
- if (tmp_gh == gh && count == 1)
+ if (gh->gh_flags & GL_NOCACHE && gl->gl_holders.next == &gh->gh_list &&
+ gl->gl_holders.prev == &gh->gh_list)
+ /* There's a race here. If there are two holders, and both
+ * are dq'ed at almost the same time, you can't guarantee that
+ * you will call handle_callback. Fixing this will require
+ * some refactoring */
handle_callback(gl, LM_ST_UNLOCKED);
- }
lock_on_glock(gl);
hooks/post-receive
--
Cluster Project