[Linux-cluster] scsi reservation issue with GFS

Alan A alan.zg at gmail.com
Thu Feb 19 17:07:12 UTC 2009


Thank you, Ryan. That was exactly right, but it will not let me remove the
reservation. Here is the output of the commands I tried:

[root at fendev04 ~]# sg_persist -o -C -K 0xb0b40001 /dev/gfs_acct61/acct61
  HP        OPEN-V            6002
  Peripheral device type: disk
persistent reserve out: scsi status: Reservation Conflict
PR out: command failed
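
If I am reading the SPC-3 semantics correctly, a PROUT CLEAR is itself
rejected with a Reservation Conflict when the issuing initiator has no
registration, or when the key passed with -K is not that initiator's own
registered key. A possible workaround (untested here; the 0x1 scratch key
is arbitrary) would be to register this node first and then clear through
that registration. CLEAR should then drop the reservation and remove every
registered key:

sg_persist -o -G -S 0x1 /dev/gfs_acct61/acct61   # register a scratch key
sg_persist -o -C -K 0x1 /dev/gfs_acct61/acct61   # clear via that key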

---------

[root at fendev04 ~]# sg_persist -i -k /dev/gfs_acct61/acct61
  HP        OPEN-V            6002
  Peripheral device type: disk
  PR generation=0x2, 2 registered reservation keys follow:
    0xb0b40001
    0xb0b40003
[root at fendev04 ~]# sg_persist -i -r /dev/gfs_acct61/acct61
  HP        OPEN-V            6002
  Peripheral device type: disk
  PR generation=0x2, Reservation follows:
    Key=0xb0b40001
    scope: LU_SCOPE,  type: Write Exclusive, registrants only
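
For completeness: the output above says the reservation is Write Exclusive,
registrants only, held under key 0xb0b40001, with two keys still registered.
An alternative to a full clear, again assuming this node registers a scratch
key first as sketched above, would be a PREEMPT, which takes over the
reservation and removes only the preempted key rather than wiping every
registration:

sg_persist -o -P -K 0x1 -S 0xb0b40001 /dev/gfs_acct61/acct61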

On Thu, Feb 19, 2009 at 10:29 AM, Ryan O'Hara <rohara at redhat.com> wrote:

>
> On Wed, Feb 18, 2009 at 04:30:04PM -0600, Alan A wrote:
> > Hello all!
> > Thanks for the help. I have removed the sg3_utils package and rebooted
> > all the nodes. I also removed any SCSI fencing entries from the
> > cluster.conf file.
> >
> > I still have a problem getting GFS up on one of the nodes. I checked
> > chkconfig and made sure scsi_reserve is off.
>
> I suggest that you check to see if any SCSI reservations exist on your
> disk. Even though you disabled scsi_reserve, the reservations could still
> exist and cause problems.
>
> sg_persist -i -r <device>
>
> This will show you any existing reservations. If any exist, they can
> be cleared using this command:
>
> sg_persist -o -C -K <key> <device>
>
> where <key> is the reservation key listed by the first command.
>
>
>
> > This is the output of service gfs start - it hangs (cman and clvmd
> > work just fine):
> > [root at fendev04 ~]# service gfs start
> > Mounting GFS filesystems:
> >
> > From the /var/log/messages:
> > Feb 18 16:25:46 fendev04 kernel: dlm: account61: group leave failed -512 0
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/account61/control" error -1 2
> > Feb 18 16:25:46 fendev04 kernel: GFS: fsid=test1_cluster:account61.0: withdrawn
> > Feb 18 16:25:46 fendev04 kernel:  [<f911fb3e>] gfs_lm_withdraw+0x76/0x82 [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<f9135db6>] gfs_io_error_bh_i+0x2c/0x31 [gfs]
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/account61/event_done" error -1 2
> > Feb 18 16:25:46 fendev04 kernel:  [<f910ed07>] gfs_logbh_wait+0x43/0x62 [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<f91227a1>] disk_commit+0x4a6/0x69a [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<f9122f6c>] gfs_log_dump+0x2aa/0x364 [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<f9134354>] gfs_make_fs_rw+0xeb/0x113 [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<f9129fd4>] init_journal+0x230/0x2fe [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<f912a928>] fill_super+0x402/0x576 [gfs]
> > Feb 18 16:25:46 fendev04 kernel:  [<c04787fa>] get_sb_bdev+0xc6/0x110
> > Feb 18 16:25:46 fendev04 gfs_controld[29551]: mount_client_dead ci 8 no sysfs entry for fs
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/gfs_web/control" error -1 2
> > Feb 18 16:25:46 fendev04 kernel:  [<c045af57>] __alloc_pages+0x57/0x297
> > Feb 18 16:25:46 fendev04 gfs_controld[29551]: mount_client_dead ci 6 no sysfs entry for fs
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/gfs_web/event_done" error -1 2
> > Feb 18 16:25:46 fendev04 kernel:  [<f9129be1>] gfs_get_sb+0x12/0x16 [gfs]
> > Feb 18 16:25:46 fendev04 gfs_controld[29551]: mount_client_dead ci 7 no sysfs entry for fs
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/cati_gfs/id" error -1 2
> > Feb 18 16:25:46 fendev04 kernel:  [<f912a526>] fill_super+0x0/0x576 [gfs]
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/cati_gfs/control" error -1 2
> > Feb 18 16:25:46 fendev04 kernel:  [<c04782bf>] vfs_kern_mount+0x7d/0xf2
> > Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/cati_gfs/event_done" error -1 2
> > Feb 18 16:25:46 fendev04 kernel:  [<c0478366>] do_kern_mount+0x25/0x36
> > Feb 18 16:25:46 fendev04 kernel:  [<c048b381>] do_mount+0x5f5/0x665
> > Feb 18 16:25:46 fendev04 kernel:  [<c0434627>] autoremove_wake_function+0x0/0x2d
> > Feb 18 16:25:46 fendev04 kernel:  [<c05aaf08>] do_sock_read+0xae/0xb7
> > Feb 18 16:25:46 fendev04 kernel:  [<c05ab49b>] sock_aio_read+0x53/0x61
> > Feb 18 16:25:46 fendev04 kernel:  [<c05aee2d>] sock_def_readable+0x31/0x5b
> > Feb 18 16:25:47 fendev04 kernel:  [<c045ac63>] get_page_from_freelist+0x96/0x333
> > Feb 18 16:25:47 fendev04 kernel:  [<c048a273>] copy_mount_options+0x26/0x109
> > Feb 18 16:25:47 fendev04 kernel:  [<c048b45e>] sys_mount+0x6d/0xa5
> > Feb 18 16:25:47 fendev04 kernel:  [<c0404f17>] syscall_call+0x7/0xb
> > Feb 18 16:25:47 fendev04 kernel:  =======================
> > Feb 18 16:25:47 fendev04 kernel: dlm: gfs_web: group join failed -512 0
> > Feb 18 16:25:47 fendev04 kernel: lock_dlm: dlm_new_lockspace error -512
> > Feb 18 16:25:47 fendev04 kernel: can't mount proto=lock_dlm, table=test1_cluster:gfs_web, hostdata=jid=0:id=327682:first=1
> > Feb 18 16:25:47 fendev04 kernel: dlm: cati_gfs: group join failed -512 0
> > Feb 18 16:25:47 fendev04 kernel: lock_dlm: dlm_new_lockspace error -512
> > Feb 18 16:25:47 fendev04 kernel: can't mount proto=lock_dlm, table=test1_cluster:cati_gfs, hostdata=jid=0:id=393218:first=1
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>



-- 
Alan A.