From andrew.kerber at gmail.com Mon Jun 6 21:37:01 2016
From: andrew.kerber at gmail.com (Andrew Kerber)
Date: Mon, 6 Jun 2016 16:37:01 -0500
Subject: [Linux-cluster] Fencing Question
Message-ID: 

I am doing some experimentation with Linux clustering, and I am still
fairly new to it. I have built a cluster as a proof of concept running a
PostgreSQL 9.5 database on gfs2 using VMware Workstation 12.0 and RHEL7.
GFS2 requires a fencing resource, which I have managed to create using
fence_virsh, and the clustering software thinks the fencing is working.
However, it will not actually shut down a node, and I have not been able
to figure out the appropriate parameters for VMware Workstation to get it
to work. I tried fence_scsi also, but that doesn't seem to work with a
shared vmdk. Has anyone figured out a fencing agent that will work with
VMware Workstation?

Failing that, is there a comprehensive set of instructions for creating my
own fencing agent?

-- 
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'

From emi2fast at gmail.com Mon Jun 6 22:05:52 2016
From: emi2fast at gmail.com (emmanuel segura)
Date: Tue, 7 Jun 2016 00:05:52 +0200
Subject: [Linux-cluster] Fencing Question
In-Reply-To: 
References: 
Message-ID: 

Did you configure virsh to manage your VMware VMs?

fence_virsh is an I/O fencing agent which can be used with virtual
machines managed by libvirt. It logs in to a dom0 via ssh and runs the
virsh command there, which does all the work.

By default, virsh needs the root account to work properly, so you must
allow root ssh login in your sshd_config.

fence_virsh accepts options on the command line as well as from stdin.
Fenced sends parameters through stdin when it execs the agent.
fence_virsh can also be run by itself with command-line options, which is
useful for testing and for turning outlets on or off from scripts.
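Since fence_virsh can be run standalone, it is easy to test by hand before
the cluster ever calls it. A minimal check, assuming the VMs really are
managed by libvirt on the target host, and using placeholder values for
the host address, credentials and VM name:

  # ask the agent for the VM's power state
  fence_virsh -a 192.168.122.1 -l root -p secret -n node1-vm -o status

  # power-cycle the VM (this really does reset it)
  fence_virsh -a 192.168.122.1 -l root -p secret -n node1-vm -o reboot

If that works from a shell, roughly the same values go into the stonith
resource, for example with pcs (resource name and host list are again
placeholders):

  pcs stonith create fence-node1 fence_virsh ipaddr=192.168.122.1 \
      login=root passwd=secret port=node1-vm pcmk_host_list=node1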
2016-06-06 23:37 GMT+02:00 Andrew Kerber :
> I am doing some experimentation with Linux clustering, and I am still
> fairly new to it. I have built a cluster as a proof of concept running a
> PostgreSQL 9.5 database on gfs2 using VMware Workstation 12.0 and RHEL7.
> GFS2 requires a fencing resource, which I have managed to create using
> fence_virsh, and the clustering software thinks the fencing is working.
> However, it will not actually shut down a node, and I have not been able
> to figure out the appropriate parameters for VMware Workstation to get it
> to work. I tried fence_scsi also, but that doesn't seem to work with a
> shared vmdk. Has anyone figured out a fencing agent that will work with
> VMware Workstation?
>
> Failing that, is there a comprehensive set of instructions for creating my
> own fencing agent?
>
>
> --
> Andrew W. Kerber
>
> 'If at first you don't succeed, don't take up skydiving.'
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

-- 
.~.
/V\
// \\
/( )\
^`~'^

From lists at alteeve.ca Mon Jun 6 22:22:50 2016
From: lists at alteeve.ca (Digimer)
Date: Mon, 6 Jun 2016 18:22:50 -0400
Subject: [Linux-cluster] Fencing Question
In-Reply-To: 
References: 
Message-ID: <1ddc23b7-938a-83bd-7fb0-0a1f997ecbeb@alteeve.ca>

On 06/06/16 05:37 PM, Andrew Kerber wrote:
> I am doing some experimentation with Linux clustering, and I am still
> fairly new to it. I have built a cluster as a proof of concept running a
> PostgreSQL 9.5 database on gfs2 using VMware Workstation 12.0 and RHEL7.
> GFS2 requires a fencing resource, which I have managed to create using
> fence_virsh, and the clustering software thinks the fencing is working.
> However, it will not actually shut down a node, and I have not been able
> to figure out the appropriate parameters for VMware Workstation to get it
> to work. I tried fence_scsi also, but that doesn't seem to work with a
> shared vmdk. Has anyone figured out a fencing agent that will work with
> VMware Workstation?
>
> Failing that, is there a comprehensive set of instructions for creating my
> own fencing agent?
>
>
> --
> Andrew W. Kerber
>
> 'If at first you don't succeed, don't take up skydiving.'

The 'fence_vmware' agent (and its helpers) is designed specifically for
VMware. I've not used it myself, but I've heard of many people using it
successfully.

Side note: GFS2, for all its greatness, is not fast (nothing using cluster
locking will be). Be sure to performance-test before production. If you
find the performance is not good, consider active/passive on a standard FS.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
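For comparison, the active/passive layout suggested above is usually just
an ordered resource group on a plain filesystem. A rough sketch with pcs,
in which the device, mount point, data directory and address are
placeholders (fencing is still required, and a real setup normally also
needs exclusive activation of the shared volume):

  pcs resource create pg_fs ocf:heartbeat:Filesystem \
      device=/dev/sdb1 directory=/var/lib/pgsql fstype=xfs --group postgres
  pcs resource create pg_db ocf:heartbeat:pgsql \
      pgdata=/var/lib/pgsql/9.5/data --group postgres
  pcs resource create pg_ip ocf:heartbeat:IPaddr2 \
      ip=192.168.0.50 cidr_netmask=24 --group postgres

The group keeps the filesystem, the database and the virtual IP together
on one node at a time and starts them in that order.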
From andrew.kerber at gmail.com Mon Jun 6 22:55:29 2016
From: andrew.kerber at gmail.com (Andrew Kerber)
Date: Mon, 6 Jun 2016 17:55:29 -0500
Subject: [Linux-cluster] Fencing Question
In-Reply-To: 
References: 
Message-ID: <30436E50-CACB-4FDE-AC09-33B3FF07F90C@gmail.com>

I think you have identified the information I am missing. Is there
documentation on configuring virsh to manage VMware Workstation VMs? In
all of my research, I have not found such documentation.

Sent from my iPad

> On Jun 6, 2016, at 5:05 PM, emmanuel segura wrote:
>
> Did you configure virsh to manage your VMware VMs?
>
> fence_virsh is an I/O fencing agent which can be used with virtual
> machines managed by libvirt. It logs in to a dom0 via ssh and runs the
> virsh command there, which does all the work.
>
> By default, virsh needs the root account to work properly, so you must
> allow root ssh login in your sshd_config.
>
> fence_virsh accepts options on the command line as well as from stdin.
> Fenced sends parameters through stdin when it execs the agent.
> fence_virsh can also be run by itself with command-line options, which is
> useful for testing and for turning outlets on or off from scripts.
>
> 2016-06-06 23:37 GMT+02:00 Andrew Kerber :
>> I am doing some experimentation with Linux clustering, and I am still
>> fairly new to it. I have built a cluster as a proof of concept running a
>> PostgreSQL 9.5 database on gfs2 using VMware Workstation 12.0 and RHEL7.
>> GFS2 requires a fencing resource, which I have managed to create using
>> fence_virsh, and the clustering software thinks the fencing is working.
>> However, it will not actually shut down a node, and I have not been able
>> to figure out the appropriate parameters for VMware Workstation to get
>> it to work. I tried fence_scsi also, but that doesn't seem to work with
>> a shared vmdk. Has anyone figured out a fencing agent that will work
>> with VMware Workstation?
>>
>> Failing that, is there a comprehensive set of instructions for creating
>> my own fencing agent?
>>
>>
>> --
>> Andrew W. Kerber
>>
>> 'If at first you don't succeed, don't take up skydiving.'
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
> --
> .~.
> /V\
> // \\
> /( )\
> ^`~'^
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From anprice at redhat.com Tue Jun 7 11:04:43 2016
From: anprice at redhat.com (Andrew Price)
Date: Tue, 7 Jun 2016 12:04:43 +0100
Subject: [Linux-cluster] gfs2-utils 3.1.9 released
Message-ID: <4a0793ea-27d1-2c6b-4e54-f7b269283895@redhat.com>

Hi all,

I am happy to announce the 3.1.9 release of gfs2-utils. This release
includes the following notable changes:

* fsck.gfs2 now uses less memory
* Improvements and fixes to fsck.gfs2's xattr and resource group checking
* mkfs.gfs2 reports progress so that you can tell it's still alive during
  a long mkfs
* mkfs.gfs2's -t option now accepts a longer cluster name and fs name (see
  the example after this list)
* A udev helper script is now installed to suspend the device on withdraw,
  preventing hangs
* Support for the de_rahead and de_cookie dirent fields has been added
* gfs2_edit savemeta performance improvements
* The glocktop utility has been added to help analyze locking-related
  performance problems
* The mkfs.gfs2(8) man page has been overhauled
* The rgrplbv and loccookie mount options have been added to the gfs2(5)
  man page
* Fixes for out-of-tree builds and testing
* Various other fixes, cleanups and enhancements
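As a quick illustration of the longer locktable names that mkfs.gfs2 -t
now accepts (the cluster name, filesystem name, journal count and device
below are placeholders; with lock_dlm the cluster name part still has to
match the corosync cluster name):

  mkfs.gfs2 -p lock_dlm -t my_longer_cluster_name:pg_data -j 2 \
      /dev/vg_shared/lv_gfs2

The new progress reporting appears during the same run, which is handy on
large devices.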
See below for a complete list of changes.

The source tarball is available from:

https://fedorahosted.org/released/gfs2-utils/gfs2-utils-3.1.9.tar.gz

Please report bugs against the gfs2-utils component of Fedora rawhide:

https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora&component=gfs2-utils&version=rawhide

Regards,

Andy

Changes since version 3.1.8:

Abhi Das (2):
      fsck.gfs2: replace recent i_goal fixes with simple logic
      gfs2-utils: Fix hang on withdraw

Andreas Gruenbacher (4):
      libgfs2: Add support for dirent.de_rahead
      gfs2_edit: Include dirent.de_rahead in directory listings
      gfs2: Fix printf format errors on x86
      gfs2_edit: Add attribute printf for print_gfs2

Andrew Price (49):
      scripts: rename gfs2_wd_udev.sh to gfs2_withdraw_helper
      scripts: install the withdraw helper script
      scripts: install the withdraw udev rules script
      gfs2-utils: Add a check for the de_rahead field
      gfs2_edit savemeta: speed up is_block_in_per_node()
      fsck.gfs2: Really be quiet in quiet mode
      fsck.gfs2: Improve read ahead in pass1_process_bitmap
      libgfs2: Return the rgrp count in lgfs2_rgrps_plan()
      mkfs.gfs2: Always validate the locktable option
      gfs2-utils: Add the glocktop utility
      gfs2_edit: Don't use the global block variable in block_is_a_journal
      gfs2_edit: Don't use the global block variable in block_is_in_per_node
      gfs2_edit: Don't use the global block variable in block_is_jindex
      gfs2_edit: Don't use the global block variable in block_is_inum_file
      gfs2_edit: Don't use global block variable in block_is_statfs_file
      gfs2_edit: Don't use global block variable in block_is_quota_file
      gfs2_edit: Don't use global block variable in block_is_rindex
      gfs2_edit: Don't use the global block variable in block_is_per_node
      gfs2_edit: Don't use the global block variable in block_is_systemfile
      gfs2_edit: Don't use the global block variable in block_is_rgtree
      gfs2_edit: Don't use the global block variable in block_is_journals
      gfs2_edit: Only declare the block variable where needed
      gfs2_edit: Don't use the global block variable in save_block
      gfs2_edit: Don't use the global block variable in save_indirect_blocks
      gfs2_edit: Don't use the global block variable in save_inode_data
      gfs2_edit: Don't use the global block variable in get_gfs_struct_info
      gfs2_edit: Don't use the global block variable in save_ea_block
      gfs2_edit: Don't use the global block variable in save_allocated
      gfs2_edit: Don't use the global block variable in savemeta
      gfs2_edit: Don't use the global bh variable in display_block_type
      gfs2_edit: Don't use the global block variable in savemeta.c
      gfs2_edit: Don't export bh
      gfs2_edit: Remove block_in_mem and fix a memory leak
      libgfs2: Support the new dirent de_cookie field
      mkfs.gfs2(8) man page improvements
      glocktop: Fix a tight loop under nohup
      libgfs2: New function lgfs2_rindex_read_one()
      gfs2-utils tests: Add helper program to damage rgrps
      gfs2-utils tests: Add fsck.gfs2 rgrp/rindex repair tests
      mkfs.gfs2: Remove unnecessary externs from progress functions
      gfs2_edit: Don't hijack bh->b_data in read_master_dir
      mkfs.gfs2: Open the target device with O_EXCL
      fsck.gfs2: Fix a potential memory leak in pass3
      gfs2_edit savemeta: Fix use of uninitialized 'blk'
      gfs2-utils/po: Update translation template
      gfs2-utils: Don't build the testsuite script in srcdir
      gfs2-utils tests: Provide 'installcheck' with the path to gfs2l
      gfs2-utils tests: Remove the testsuite script in distclean
      gfs2-utils/po: Update translation template

Benjamin Marzinski (1):
      gfs2(5): add rgrplbv and loccookie info the the manpage
Bob Peterson (78):
      fsck.gfs2: Change duptree structure to have generic flags
      fsck.gfs2: Detect, fix and clone duplicate block refs within a dinode
      libgfs2: Check block range when inserting into rgrp tree
      libgfs2: Check rgd->bits before referencing it
      fsck.gfs2: Add check for gfs1 invalid inode refs in dentry
      fsck.gfs2: Make debug messages more succinct wrt extended attributes
      fsck.gfs2: Break up funtion handle_dup_blk
      fsck.gfs2: Only preserve the _first_ acceptable inode reference
      fsck.gfs2: Don't just assume the remaining EA reference is good
      fsck.gfs2: Don't delete inode for duplicate reference in EA
      fsck.gfs2: Don't traverse EAs that belong to another inode
      fsck.gfs2: Refactor function check_indirect_eattr
      fsck.gfs2: Once an indirect ea error is found, flag all that follow
      fsck.gfs2: Always restore saved value for di_eattr
      fsck.gfs2: Remove redundancy in add_duplicate_ref
      fsck.gfs2: Don't remove duplicate eattr blocks
      fsck.gfs2: Refactor check_eattr_entries and add error messages
      fsck.gfs2: remove bad EAs at the end, not as-you-go
      fsck.gfs2: Combine remove_inode_eattr with its only caller
      fsck.gfs2: Print debug message to dilineate metadata blocks
      fsck.gfs2: Remove pass1c in favor of processing in pass1
      fsck.gfs2: Clone duplicate data block pointers
      gfs2_edit: Log descriptor continuation blocks print wrong info
      libgfs2: Backport rbtree fixes from upstream
      libgfs2: Check for obvious corruption reading rindex
      libgfs2: Change rgrp counts to be uint64_t
      gfs2_edit: Don't reference an empty rgrp tree
      fsck.gfs2: Read jindex before making rindex repairs
      fsck.gfs2: better reporting of false positive rgrp identification
      fsck.gfs2: Minor reformatting
      fsck.gfs2: Ditch variable rgcount_from_index
      fsck.gfs2: Add ability to fix rindex file size
      fsck.gfs2: Do not try to overrun the max rgrps that fit inside rindex
      fsck.gfs2: Detect multiple rgrp grow segments
      fsck.gfs2: Move pass5 to immediately follow pass1
      fsck.gfs2: Convert block_type to bitmap_type after pass1 and 5
      fsck.gfs2: Change bitmap_type variables to int
      fsck.gfs2: Use di_entries to determine if lost+found was created
      fsck.gfs2: pass1b shouldn't complain about non-bitmap blocks
      fsck.gfs2: Change all fsck_blockmap_set to fsck_bitmap_set
      fsck.gfs2: Move set_ip_blockmap to pass1
      fsck.gfs2: Remove unneeded parameter instree from set_ip_blockmap
      fsck.gfs2: Move leaf repair to pass2
      fsck.gfs2: Eliminate astate code
      fsck.gfs2: Move reprocess code to pass1
      fsck.gfs2: Separate out functions that may only be done after pass1
      fsck.gfs2: Divest check_metatree from fsck_blockmap_set
      fsck.gfs2: eliminate fsck_blockmap_set from check_eattr_entries
      fsck.gfs2: Move blockmap stuff to pass1.c
      fsck: make pass1 call bitmap reconciliation AKA pass5
      fsck.gfs2: make blockmap global variable only to pass1
      fsck.gfs2: Add wrapper function pass1_check_metatree
      fsck.gfs2: pass counted_links into fix_link_count in pass4
      fsck.gfs2: refactor pass4 function scan_inode_list
      fsck.gfs2: More refactoring of pass4 function scan_inode_list
      fsck.gfs2: Fix white space problems
      fsck.gfs2: move link count info for directories to directory tree
      fsck.gfs2: Use bitmaps instead of linked list for inodes w/nlink == 1
      fsck.gfs2: Refactor check_n_fix_bitmap to make it more readable
      fsck.gfs2: adjust rgrp inode count when fixing bitmap
      fsck.gfs2: blocks cannot be UNLINKED in pass1b or after that
      fsck.gfs2: Add error checks to get_next_leaf
      fsck.gfs2: re-add a non-allocating repair_leaf to pass1
      libgfs2: Allocate new GFS1 metadata as type 3, not type 1
      fsck.gfs2: Undo partially done metadata records
      fsck.gfs2: Eliminate redundant code in _fsck_bitmap_set
      fsck.gfs2: Fix inode counting bug
      fsck.gfs2: Adjust bitmap for lost+found after adding to dirtree
      fsck.gfs2: Add initialization checks for GFS1 used metadata
      fsck.gfs2: Use BLKST constants to make pass5 more clear
      fsck.gfs2: Fix GFS1 "used meta" accounting bug
      fsck.gfs2: pass1b is too noisy wrt gfs1 non-dinode metadata
      fsck.gfs2: Fix rgrp dinode accounting bug
      fsck.gfs2: Fix rgrp accounting in check_n_fix_bitmap
      gfs2_edit: Make 'j' jump to journaled block on log descriptors
      fsck.gfs2: shorten bitmap state variable names
      fsck.gfs2: Speed up fsck.gfs2 with a new inode rgrp pointer
      fsck.gfs2: Further performance gains with function valid_block_ip

Paul Evans (3):
      mkfs.gfs2: Allow longer cluster names
      mkfs.gfs2: Add a progress indicator to mkfs.gfs2
      mkfs.gfs2: print message about BKLDISCARD ioctl taking a long time

Shane Bradley (1):
      gfs2_lockcapture: Fix condition where dlm lockspaces parsing failed