From jpokorny at redhat.com Thu Mar 2 23:12:15 2017
From: jpokorny at redhat.com (Jan Pokorný)
Date: Fri, 3 Mar 2017 00:12:15 +0100
Subject: [Linux-cluster] Pagure.io as legacy codebases/distribution files/documentation hosting (Was: Moving cluster project)
In-Reply-To: <20170228021803.GE10310@redhat.com>
References: <35cd1538-b803-b70d-7b9e-3b2f25d74a56@redhat.com> <966ceedb-f76e-5113-4f1f-ec23f90259e2@redhat.com> <20170117195831.GA925@redhat.com> <6290aea0-152b-6603-72ec-2b77a4b18956@redhat.com> <20170117212757.GB925@redhat.com> <20170228021803.GE10310@redhat.com>
Message-ID: <20170302231215.GC10549@redhat.com>

[I've realized I should give a heads-up to linux-cluster as well,
besides the cluster-devel and developers-at-clusterlabs lists,
especially since I am reusing its name for a projects group/namespace
on pagure.io]

On 28/02/17 03:18 +0100, Jan Pokorný wrote:
> On 17/01/17 22:27 +0100, Jan Pokorný wrote:
>> On 17/01/17 21:14 +0000, Andrew Price wrote:
>>> On 17/01/17 19:58, Jan Pokorný wrote:
>>>> So I think we should arrange for a move to pagure.io for this cluster
>>>> project as well if possible, if only to retain the ability to change
>>>> something should there be a need.
>>>
>>> Good plan.
>>>
>>>> I can pursue this if there are no complaints. Just let me know
>>>> (off-list) who aspires to membership of the cluster-maint group
>>>> (to be created).
>>>
>>> Could you give the gfs2-utils-maint group push access to the cluster
>>> project once it's been set up? (It is possible to add many groups to
>>> a project.) I think that would be the most logical way to do it.
>>
>> Sure, and thanks for the cumulative access assignment tip.
>>
>> I'll proceed on Friday or early next week, then.
>
> Well, my scheduler didn't get to it until now, so sorry
> to anyone starting to worry.
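[Editorial sketch, not part of the original message: for anyone with an
existing local clone, following the move is just a remote-URL change. The
snippet simulates this in a throwaway repo so it needs no network access;
the remote name "origin" and the old cgit URL are assumptions, and in
practice you would run only the set-url line inside your old checkout.]

```shell
set -e
# Simulate repointing an old fedorahosted.org clone at the new
# Pagure hosting, using a freshly created throwaway repo:
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" remote add origin https://git.fedorahosted.org/cgit/cluster.git
# The only step needed in a real clone:
git -C "$repo" remote set-url origin https://pagure.io/linux-cluster/cluster.git
git -C "$repo" remote get-url origin   # prints the new pagure.io URL
```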
>
> So what's been done:
>
> - git repo moved over to https://pagure.io/linux-cluster/cluster
>   + granted commit rights for the gfs2-utils-maint group
>     (and will add some more folks to the linux-cluster group,
>     feel free to bug me off-list about that)
>   + mass-committed an explanation change to every branch at
>     the discontinued fedorahosted.org (fh.o) provider I could,
>     as some are already frozen
>     (https://git.fedorahosted.org/cgit/cluster.git/)
>   . I've decided to use a namespace (because there are possibly
>     more projects to be migrated under that label),

Actually, there are quite a few legacy projects copied over, some
merely for plain archival bit-preservation:
https://pagure.io/group/linux-cluster
[did I miss anything? AFAIK, the gfs2-utils and dlm components migrated
on their own, and corosync has been on GitHub for years]

Some components otherwise found under the ClusterLabs label (note that
the *-agents are common to both worlds) are also affected, and for
those I created a separate ClusterLabs group on pagure.io:
https://pagure.io/group/ClusterLabs
The respective projects there are just envelopes that I used for
uploading distribution files and/or documentation that were so far
served by fedorahosted.org [*], not for active code hosting (at this
time, anyway).

[*] locations like:
    https://fedorahosted.org/releases/q/u/quarterback/
    https://fedorahosted.org/releases/f/e/fence-agents/

> and I have stuck with linux-cluster, referring to the mailing list
> of the same name that once actively served to discuss the
> cluster stack in question (and is quite abandoned nowadays) [i.e.
this very list that I somehow originally ignored, sigh]

> - quickly added backup location links at
>   https://fedorahosted.org/cluster/ and
>   https://fedorahosted.org/cluster/wiki/FenceAgentAPI,

I've converted the latter to Markdown and exposed it at
https://docs.pagure.org/ClusterLabs.fence-agents/FenceAgentAPI.md
Maintenance or plain source access should be as simple as cloning from
ssh://git at pagure.io/docs/ClusterLabs/fence-agents.git or
https://pagure.io/docs/ClusterLabs/fence-agents.git, respectively.

> i.e., the pages that seem most important to me, to allow for
> smooth "forward compatibility"; the links currently refer to vain
> stubs at the ClusterLabs wiki, but that can be solved later on -- I am
> still unsure whether the trac wikis at fh.o will be served in the next
> phase or shut down right away, and apparently this measure will
> help only in the former case
>
> What to do:
> - move releases over to pagure.io as well:
>   https://fedorahosted.org/releases/c/l/cluster/

Done for cluster: http://releases.pagure.org/linux-cluster/cluster/
Tarballs for components split out from here will eventually be uploaded
to the respective release directories of the particular projects, e.g.,
http://releases.pagure.org/ClusterLabs/fence-agents/; it's a WIP.

> - possibly migrate some original wiki content to proper
>   "doc pages" exposed directly through pagure.io

So far I am just collecting the cluster wiki texts for possible later
resurrection.

> - resolve the question of the linked wiki stubs and
>   cross-linking as such
>
> Any comments? Ideas?

-- 
Jan (Poki)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From amjadcsu at gmail.com Fri Mar 3 09:56:17 2017
From: amjadcsu at gmail.com (Amjad Syed)
Date: Fri, 3 Mar 2017 12:56:17 +0300
Subject: [Linux-cluster] Active passive cluster with shared storage without LVM
Message-ID: 

Hello,

We are using RHEL 7.2 to create an active/passive cluster using
pacemaker. The cluster will have shared storage.

Our servers have standard partitions, not LVM.

We are using only 2 servers for this cluster.

Is it possible to create a shared storage resource in pacemaker without
using LVM or CLVM?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at fuhrmann.homedns.org Fri Mar 3 14:05:07 2017
From: stefan at fuhrmann.homedns.org (Stefan Fuhrmann)
Date: Fri, 03 Mar 2017 15:05:07 +0100
Subject: [Linux-cluster] Active passive cluster with shared storage without LVM
In-Reply-To: 
References: 
Message-ID: <5918121.FuO44AeqPb@stefan-ubu>

Hi,

Have a look at DRBD: http://www.drbd.org/en/
In an active/passive setup you can use an ext filesystem.

Stefan

Am Freitag, 3. März 2017, 12:56:17 CET schrieb Amjad Syed:
> Hello,
>
> We are using RHEL 7.2 to create an active/passive cluster using
> pacemaker. The cluster will have shared storage.
>
> Our servers have standard partitions, not LVM.
>
> We are using only 2 servers for this cluster.
>
> Is it possible to create a shared storage resource in pacemaker without
> using LVM or CLVM?

From amjadcsu at gmail.com Wed Mar 22 07:11:34 2017
From: amjadcsu at gmail.com (Amjad Syed)
Date: Wed, 22 Mar 2017 10:11:34 +0300
Subject: [Linux-cluster] Active/passive cluster between physical and VM
Message-ID: 

Hello,

We are planning to build a 2-node active/passive cluster using pacemaker.
Can the cluster be built between one physical machine and one VM on
CentOS 7.x?
If yes, what can be used as the fencing agent?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at alteeve.ca Wed Mar 22 08:13:22 2017
From: lists at alteeve.ca (Digimer)
Date: Wed, 22 Mar 2017 04:13:22 -0400
Subject: [Linux-cluster] Active/passive cluster between physical and VM
In-Reply-To: 
References: 
Message-ID: <21445efc-3640-2437-34a3-65a2e087100b@alteeve.ca>

On 22/03/17 03:11 AM, Amjad Syed wrote:
> Hello,
>
> We are planning to build a 2-node active/passive cluster using pacemaker.
> Can the cluster be built between one physical machine and one VM on
> CentOS 7.x?
> If yes, what can be used as the fencing agent?

So long as the traffic between the nodes is not molested, it should
work fine. As for fencing, it depends on your hardware and
hypervisor... Using a generic example, you could use fence_ipmilan to
fence the hardware node and fence_virsh to fence a KVM/qemu-based VM.

PS - I've cc'ed the clusterlabs users ML. This list is deprecated, so
please switch over to there
(http://lists.clusterlabs.org/mailman/listinfo/users).

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein's brain than in the near certainty that people of equal
talent have lived and died in cotton fields and sweatshops."
- Stephen Jay Gould

From anprice at redhat.com Tue Mar 28 16:11:42 2017
From: anprice at redhat.com (Andrew Price)
Date: Tue, 28 Mar 2017 17:11:42 +0100
Subject: [Linux-cluster] gfs2-utils 3.1.10 released
Message-ID: <4200a0d5-5a88-b580-da55-cfe934c041ff@redhat.com>

Hi all,

I am happy to announce the 3.1.10 release of gfs2-utils. This is the
first release of gfs2-utils since the project was moved to Pagure git
hosting, and is a relatively small release, mainly adding polish and
stability improvements over 3.1.9.
This release includes the following notable changes:

* Various fsck.gfs2 checking and performance improvements
* Better handling of odd block device geometry in mkfs.gfs2
* gfs2_edit savemeta leaf chain block handling fixes
* Now builds with -D_FORTIFY_SOURCE=2
* libuuid is now used to handle UUIDs instead of custom functions
* New --enable-gprof configure option for profiling
* Unaligned access-related fixes for sparc64
* Documentation improvements

See below for a complete list of changes.

The source tarball is available from:
https://releases.pagure.org/gfs2-utils/gfs2-utils-3.1.10.tar.gz

Please report bugs against the gfs2-utils component of Fedora rawhide:
https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora&component=gfs2-utils&version=rawhide

Regards,
Andy

Changes since version 3.1.9:

Andrew Price (30):
      fsck.gfs2: Initialize removed_lastmeta in delete_block_if_notdup
      glocktop: Check the return value of fgets()
      libgfs2: Propagate the value of gfs2_rindex.__pad
      mkfs.gfs2: Tidy up are_you_sure()
      gfs2-utils: Add --enable-gprof to configure
      gfs2_convert: Fix misleading indentation warning
      gfs2-utils: Build with -D_FORTIFY_SOURCE=2
      gfs2-utils: Add a configure check for libuuid
      gfs2-utils: Use libuuid for uuid handling
      gfs2-utils tests: Disable timeout for check_rgrp
      fsck.gfs2: Handle gfs2_readi() errors in check_statfs()
      gfs2_edit savemeta: Factor out the bh saving code
      gfs2_edit savemeta: Don't read rgrp blocks twice
      gfs2_edit savemeta: Follow lf_next
      gfs2_edit: Fix unaligned access in restore_init()
      gfs2_edit: Fix unaligned accesses due to saved_metablock size
      gfs2_edit savemeta: Split out the rgrp saving code
      gfs2_edit savemeta: Save corrupt rgrp headers
      gfs2-utils tests: Re-enable a fsck.gfs2 test
      gfs2-utils: Rename README.build to README
      gfs2-utils: README file improvements
      mkfs.gfs2: Add an extended option for device topology testing
      mkfs.gfs2: Warn when device is misaligned
      mkfs.gfs2: Disable rgrp alignment when dev topology is unsuitable
      mkfs.gfs2: Disregard device topology if zero-sized sectors reported
      gfs2-utils tests: Add tests for device topology handling in mkfs.gfs2
      gfs2-utils README: List libuuid as a dependency
      gfs2-utils docs: Update some URLs
      gfs2_edit: Clarify savemeta output file type in docs
      gfs2-utils/po: Update translation template

Bob Peterson (10):
      fsck.gfs2: "undo" functions can stop too early on duplicates
      fsck.gfs2: link count checking wrong inode's formal inode number
      fsck.gfs2: check formal inode number when links go from 1 to 2
      gfs2_edit: Add rgrepair option
      libgfs2: Make rg repair do better sanity checks on rindex
      fsck.gfs2: Prevent rgrp segment overflow
      fsck.gfs2: allow rindex recovery when too many segments
      fsck.gfs2: Repair rindex entries that were destroyed
      fsck.gfs2: Remember the previous rgrp pointer for speed
      fsck.gfs2: Make pass2 go by directory rbtree for performance

From saffroy at gmail.com Fri Mar 31 22:46:45 2017
From: saffroy at gmail.com (Jean-Marc Saffroy)
Date: Sat, 1 Apr 2017 00:46:45 +0200 (CEST)
Subject: [Linux-cluster] missing tags on dlm git repo at pagure.io
Message-ID: 

Hi,

I just noticed that git tags are gone on the repo at pagure.io.
In case someone wants to restore them, in my older checkout the tags are:

$ git log --no-walk --tags --pretty="%h %d %s" --decorate=full
e9302c0  (tag: refs/tags/dlm-4.0.7, refs/remotes/origin/master, refs/remotes/origin/HEAD) release 4.0.7
5822d89  (tag: refs/tags/dlm-4.0.6) release 4.0.6
8ab4292  (tag: refs/tags/dlm-4.0.5) release 4.0.5
78979b0  (tag: refs/tags/dlm-4.0.4) release 4.0.4
2811cdf  (tag: refs/tags/dlm-4.0.3) release 4.0.3
ccb5e40  (tag: refs/tags/dlm-4.0.2) release 4.0.2
2b69766  (HEAD, tag: refs/tags/dlm-4.0.1) release 4.0.1
499cb28  (tag: refs/tags/dlm-4.0.0) release 4.0.0
d38ab51  (tag: refs/tags/dlm-3.99.5) release 3.99.5
88e1465  (tag: refs/tags/3.99.4) release 3.99.4
213749e  (tag: refs/tags/dlm-3.99.3) release 3.99.3
9832cc8  (tag: refs/tags/dlm-3.99.2) release 3.99.2
dae8d81  (tag: refs/tags/dlm-3.99.1) release 3.99.1
abb3da4  (tag: refs/tags/dlm-3.99.0) release 3.99.0

Cheers,
Jean-Marc

-- 
saffroy at gmail.com
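[Editorial sketch, not part of the original message: tags that survive in
any local clone can simply be pushed back, since `git push --tags` sends
every local tag to the remote. The snippet simulates this with throwaway
repos so it needs no network access; the repo paths, committer identity,
and single tag are stand-ins, and in practice one would push from the old
dlm checkout to the repo on pagure.io.]

```shell
set -e
# Simulate restoring lost tags from a clone that still has them:
remote=$(mktemp -d); work=$(mktemp -d)
git init -q --bare "$remote"
git -C "$work" init -q
git -C "$work" -c user.name=jm -c user.email=jm@example.net \
    commit -q --allow-empty -m "release 4.0.7"
git -C "$work" tag dlm-4.0.7
git -C "$work" remote add origin "$remote"
# Tags are not pushed by default; --tags sends all local tags back:
git -C "$work" push -q origin --tags
git -C "$work" ls-remote --tags origin   # lists refs/tags/dlm-4.0.7
```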