[Linux-cluster] Linux-cluster Digest, Vol 72, Issue 9

John Wong j_w_usa at yahoo.com
Fri Apr 9 17:58:12 UTC 2010


I have tried the following:
 
I NFS-exported the GFS2 file system, and an NFS client mounted it. It worked.
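For reference, a minimal sketch of that setup, assuming the GFS2 volume is already mounted at /opt/datacluster as in the fstab example elsewhere in this digest (the client hostname and fsid are illustrative):

```shell
# On one of the cluster nodes (GFS2 already mounted at /opt/datacluster):
# export it read-only to a client outside the cluster.
echo '/opt/datacluster  nfsclient.example.com(ro,sync,fsid=100)' >> /etc/exports
exportfs -ra    # re-export everything in /etc/exports

# On the NFS client outside the cluster:
mount -t nfs clusternode:/opt/datacluster /mnt/gfs2-ro
```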
 
john
 
------------------------------

Message: 4
Date: Fri, 09 Apr 2010 10:01:51 +0100
From: Steven Whitehouse <swhiteho at redhat.com>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] mounting gfs volumes outside a cluster?
Message-ID: <1270803711.2753.4.camel at localhost>
Content-Type: text/plain; charset="UTF-8"

Hi,

On Fri, 2010-04-09 at 10:28 +0200, Matthias Leopold wrote:
> hi,
> 
> is it possible to mount a gfs volume readonly from outside the cluster
> (while the cluster is up and all nodes do I/O)?
> 

No. There is a "spectator" mount, where a read-only node can mount
without having a journal assigned to it, but it must still be part of
the cluster.

Steve.
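As a sketch, a spectator mount on a node that has already joined the cluster might look like the following (device and mount point are illustrative):

```shell
# 'spectator' mounts the filesystem read-only without allocating a
# journal; the node must still be running the cluster infrastructure.
mount -t gfs2 -o spectator /dev/vg_cluster/lv_cluster /mnt/gfs2-spectator
```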




------------------------------


--- On Fri, 4/9/10, linux-cluster-request at redhat.com <linux-cluster-request at redhat.com> wrote:


From: linux-cluster-request at redhat.com <linux-cluster-request at redhat.com>
Subject: Linux-cluster Digest, Vol 72, Issue 9
To: linux-cluster at redhat.com
Date: Friday, April 9, 2010, 9:00 AM


Send Linux-cluster mailing list submissions to
    linux-cluster at redhat.com

To subscribe or unsubscribe via the World Wide Web, visit
    https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
    linux-cluster-request at redhat.com

You can reach the person managing the list at
    linux-cluster-owner at redhat.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."


Today's Topics:

   1. Re: GFS2 and D state HTTPD processes (Ricardo Argüello)
   2. mounting gfs volumes outside a cluster? (Matthias Leopold)
   3. Re: "openais[XXXX]" [TOTEM] Retransmit List: XXXXX"    in
      /var/log/messages (Bernard Chew)
   4. Re: mounting gfs volumes outside a cluster? (Steven Whitehouse)
   5. Re: mounting gfs volumes outside a cluster? (Bernard Chew)
   6. Re: mounting gfs volumes outside a cluster? (Matthias Leopold)
   7. Clustering and Cluster-Storage channels (frank)
   8. Cluster 3.0.10 stable release (Fabio M. Di Nitto)
   9. Re: Clustering and Cluster-Storage channels (Carlos Maiolino)


----------------------------------------------------------------------

Message: 1
Date: Fri, 9 Apr 2010 00:02:37 -0500
From: Ricardo Argüello <ricardo at fedoraproject.org>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] GFS2 and D state HTTPD processes
Message-ID:
    <u2q500f0ca01004082202kfea51f14ved10273ba3c005e0 at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Looks like this bug:

GFS2 - probably lost glock call back
https://bugzilla.redhat.com/show_bug.cgi?id=498976

This is fixed in the kernel included in RHEL 5.5.
Do a "yum update" to fix it.

Ricardo Arguello

On Tue, Mar 2, 2010 at 6:10 AM, Emilio Arjona <emilio.ah at gmail.com> wrote:
> Thanks for your response, Steve.
>
> 2010/3/2 Steven Whitehouse <swhiteho at redhat.com>:
>> Hi,
>>
>> On Fri, 2010-02-26 at 16:52 +0100, Emilio Arjona wrote:
>>> Hi,
>>>
>>> we are experiencing some problems commented in an old thread:
>>>
>>> http://www.mail-archive.com/linux-cluster@redhat.com/msg07091.html
>>>
>>> We have 3 clustered servers under Red Hat 5.4 accessing a GFS2 resource.
>>>
>>> fstab options:
>>> /dev/vg_cluster/lv_cluster /opt/datacluster gfs2
>>> defaults,noatime,nodiratime,noquota 0 0
>>>
>>> GFS options:
>>> plock_rate_limit="0"
>>> plock_ownership=1
>>>
>>> httpd processes run into D status sometimes and the only solution is
>>> hard reset the affected server.
>>>
>>> Can anyone give me some hints to diagnose the problem?
>>>
>>> Thanks :)
>>>
>> Can you give me a rough idea of what the actual workload is and how it
>> is distributed among the director(y/ies)?
>
> We had problems with php sessions in the past but we fixed it by
> configuring php to store the sessions in the database instead of in
> the GFS filesystem. Now, we're having problems with files and
> directories in the "data" folder of Moodle LMS.
>
> "lsof -p" returned a i/o operation over the same folder in 2/3 nodes,
> we did a hard reset of these nodes but some hours after the CPU load
> grew up again, specially in the node that wasn't rebooted. We decided
> to reboot (v?a ssh) this node, then the CPU load went down to normal
> values in all nodes.
>
> I don't think the system's load is high enough to produce concurrent
> access problems. It's more likely to be some misconfiguration, in
> fact, we changed some GFS2 options to non default values to increase
> performance (http://www.linuxdynasty.org/howto-increase-gfs2-performance-in-a-cluster.html).
>
>>
>> This is often down to contention on glocks (one per inode), maybe
>> because there is a process or processes writing a file or directory
>> which is in use (either read-only or writable) by other processes.
>>
>> If you are using php, then you might have to strace it to find out what
>> it is really doing.
>
> Ok, we will try to strace the D processes and post the results. Hope
> we find something!!
>
>>
>> Steve.
>>
>>> --
>>>
>>> Emilio Arjona.
>>>
>>> --
>>> Linux-cluster mailing list
>>> Linux-cluster at redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
>
>
> --
> Emilio Arjona.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
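Following up on the strace suggestion above, a minimal sketch of locating and tracing the stuck processes (the PID is illustrative):

```shell
# List processes in uninterruptible sleep ('D' state):
ps -eo pid,stat,comm | awk '$2 ~ /^D/ {print $1, $3}'

# Attach to one of the listed PIDs; -f follows forks,
# -tt prints a timestamp on every syscall:
strace -f -tt -p 1234
```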



------------------------------

Message: 2
Date: Fri, 09 Apr 2010 10:28:34 +0200
From: Matthias Leopold <matthias at aic.at>
To: linux clustering <linux-cluster at redhat.com>
Subject: [Linux-cluster] mounting gfs volumes outside a cluster?
Message-ID: <4BBEE532.2090408 at aic.at>
Content-Type: text/plain; charset=ISO-8859-15

hi,

is it possible to mount a gfs volume readonly from outside the cluster
(while the cluster is up and all nodes do I/O)?

-- 
With kind regards

Matthias Leopold
System & Network Administration

Streams Telecommunications GmbH
Universitaetsstrasse 10/7, 1090 Vienna, Austria

tel: +43 1 40159113
fax: +43 1 40159300
------------------------------------------------



------------------------------

Message: 3
Date: Fri, 9 Apr 2010 16:51:52 +0800
From: Bernard Chew <bernardchew at gmail.com>
To: sdake at redhat.com, linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] "openais[XXXX]" [TOTEM] Retransmit List:
    XXXXX"    in /var/log/messages
Message-ID:
    <l2v95994e3c1004090151g98af9de7q2746aefd0022db04 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

> On Thu, Apr 8, 2010 at 12:58 AM, Steven Dake <sdake at redhat.com> wrote:
> On Wed, 2010-04-07 at 18:52 +0800, Bernard Chew wrote:
>> Hi all,
>>
>> I noticed "openais[XXXX]" [TOTEM] Retransmit List: XXXXX" repeated
>> every few hours in /var/log/messages. What does the message mean and
>> is it normal? Will this cause fencing to take place eventually?
>>
> This means your network environment dropped packets and totem is
> recovering them. This is normal operation, and in future versions such
> as corosync no notification is printed when recovery takes place.
>
> There is a bug, however, fixed in revision 2122 where if the last packet
> in the order is lost, and no new packets are unlost after it, the
> processor will enter a failed to receive state and trigger fencing.
>
> Regards
> -steve
>> Thank you in advance.
>>
>> Regards,
>> Bernard Chew
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>

Thank you for the reply Steve!

The cluster was running fine until last week, when 3 nodes restarted
suddenly. I suspect fencing took place since all 3 servers restarted
at the same time, but I couldn't find any fence-related entries in the
log. Could we have hit the bug you mentioned? Would the log indicate
that fencing has taken place in the case of that bug?

Also, I occasionally notice the message "kernel: clustat[28328]: segfault at
0000000000000024 rip 0000003b31c75bc0 rsp 00007fff955cb098 error 4";
is this related to the TOTEM message, or does it indicate
another problem?

Regards,
Bernard Chew



------------------------------

Message: 4
Date: Fri, 09 Apr 2010 10:01:51 +0100
From: Steven Whitehouse <swhiteho at redhat.com>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] mounting gfs volumes outside a cluster?
Message-ID: <1270803711.2753.4.camel at localhost>
Content-Type: text/plain; charset="UTF-8"

Hi,

On Fri, 2010-04-09 at 10:28 +0200, Matthias Leopold wrote:
> hi,
> 
> is it possible to mount a gfs volume readonly from outside the cluster
> (while the cluster is up and all nodes do I/O)?
> 

No. There is a "spectator" mount, where a read-only node can mount
without having a journal assigned to it, but it must still be part of
the cluster.

Steve.




------------------------------

Message: 5
Date: Fri, 9 Apr 2010 17:04:33 +0800
From: Bernard Chew <bernardchew at gmail.com>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] mounting gfs volumes outside a cluster?
Message-ID:
    <i2z95994e3c1004090204qba01e4b7p77a8983132e4e7f1 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

> On Fri, Apr 9, 2010 at 4:28 PM, Matthias Leopold <matthias at aic.at> wrote:
> hi,
>
> is it possible to mount a gfs volume readonly from outside the cluster
> (while the cluster is up and all nodes do I/O)?
>
> --
> With kind regards
>
> Matthias Leopold
> System & Network Administration
>
> Streams Telecommunications GmbH
> Universitaetsstrasse 10/7, 1090 Vienna, Austria
>
> tel: +43 1 40159113
> fax: +43 1 40159300
> ------------------------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>

Hi Matthias,

I am not the expert here, but how about exporting the GFS volume using NFS?

Regards,
Bernard



------------------------------

Message: 6
Date: Fri, 09 Apr 2010 12:31:55 +0200
From: Matthias Leopold <matthias at aic.at>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] mounting gfs volumes outside a cluster?
Message-ID: <4BBF021B.3090205 at aic.at>
Content-Type: text/plain; charset=ISO-8859-1

Bernard Chew wrote:
>> On Fri, Apr 9, 2010 at 4:28 PM, Matthias Leopold <matthias at aic.at> wrote:
>> hi,
>>
>> is it possible to mount a gfs volume readonly from outside the cluster
>> (while the cluster is up and all nodes do I/O)?
>>
>> --
>> With kind regards
>>
>> Matthias Leopold
>> System & Network Administration
>>
>> Streams Telecommunications GmbH
>> Universitaetsstrasse 10/7, 1090 Vienna, Austria
>>
>> tel: +43 1 40159113
>> fax: +43 1 40159300
>> ------------------------------------------------
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
> 
> Hi Matthias,
> 
> I am not the expert here but how about exporting the GFS volume using NFS?
> 
> Regards,
> Bernard
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

that's a nice idea, thanks.
On second glance this indeed seems to be a viable solution.

regards,
matthias




------------------------------

Message: 7
Date: Fri, 09 Apr 2010 13:39:21 +0200
From: frank <frank at si.ct.upc.edu>
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Clustering and Cluster-Storage channels
Message-ID: <4BBF11E9.803 at si.ct.upc.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi,
we have several machines with RHEL 5.4, and we use Clustering and 
Cluster-Storage (because we use GFS).
We also have trouble with updates because we don't know how to subscribe 
our machines to those channels. From RHN, in the "Software Channel 
Subscriptions" section, we see:

Release Channels for Red Hat Enterprise Linux 5 for x86_64
     RHEL FasTrack (v. 5 for 64-bit x86_64) (Channel Details)     
Consumes a regular entitlement (13 available)
     RHEL Optional Productivity Apps (v. 5 for 64-bit x86_64) (Channel 
Details)     Consumes a regular entitlement (13 available)
     RHEL Supplementary (v. 5 for 64-bit x86_64) (Channel Details)     
Consumes a regular entitlement (13 available)
     RHEL Virtualization (v. 5 for 64-bit x86_64) (Channel Details)     
Consumes a regular entitlement (9 available)
     Red Hat Network Tools for RHEL Server (v.5 64-bit x86_64) (Channel 
Details)     Consumes a regular entitlement (10 available)
BETA Channels for Red Hat Enterprise Linux 5 for x86_64
     RHEL Optional Productivity Apps (v. 5 for 64-bit x86_64) Beta 
(Channel Details)     Consumes a regular entitlement (13 available)
     RHEL Supplementary (v. 5 for 64-bit x86_64) Beta (Channel Details) 
     Consumes a regular entitlement (13 available)
     RHEL Virtualization (v. 5 for 64-bit x86_64) Beta (Channel Details) 
     Consumes a regular entitlement (13 available)
     Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) Beta (Channel 
Details)     Consumes a regular entitlement (13 available)
Additional Services Channels for Red Hat Enterprise Linux 5 for x86_64
     RHEL Hardware Certification (v. 5 for 64-bit x86_64) (Channel 
Details)     Consumes a regular entitlement (13 available)
Additional Services BETA Channels for Red Hat Enterprise Linux 5 for x86_64
     RHEL Cluster-Storage (v. 5 for 64-bit x86_64) Beta (Channel 
Details)     Consumes a regular entitlement (10 available)
     RHEL Clustering (v. 5 for 64-bit x86_64) Beta (Channel Details)     
Consumes a regular entitlement (10 available)
     RHEL Hardware Certification (v. 5 for 64-bit x86_64) Beta (Channel 
Details)     Consumes a regular entitlement (13 available)

There are cluster beta channels, but not the release ones. How can we 
subscribe systems to them?

Thanks and Regards.

Frank

-- 
This message has been scanned by MailScanner
for viruses and other dangerous content,
and is considered clean.



------------------------------

Message: 8
Date: Fri, 09 Apr 2010 14:05:27 +0200
From: "Fabio M. Di Nitto" <fdinitto at redhat.com>
To: linux clustering <linux-cluster at redhat.com>,    cluster-devel
    <cluster-devel at redhat.com>
Subject: [Linux-cluster] Cluster 3.0.10 stable release
Message-ID: <4BBF1807.6030004 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

The cluster team and its community are proud to announce the 3.0.10
stable release from the STABLE3 branch.

This release contains a few major bug fixes. We strongly recommend
that people update their clusters.

In order to build/run the 3.0.10 release you will need:

- - corosync 1.2.1
- - openais 1.1.2
- - linux kernel 2.6.31 (only for GFS1 users)

The new source tarball can be downloaded here:

https://fedorahosted.org/releases/c/l/cluster/cluster-3.0.10.tar.bz2

To report bugs or issues:

   https://bugzilla.redhat.com/

Would you like to meet the cluster team or members of its community?

   Join us on IRC (irc.freenode.net #linux-cluster) and share your
   experience with other sysadmins or power users.

Thanks and congratulations to all the people who contributed to this
great milestone.

Happy clustering,
Fabio

Under the hood (from 3.0.9):

Abhijith Das (5):
      gfs2_quota: Fix gfs2_quota to handle boundary conditions
      gfs2_convert: gfs2_convert segfaults when converting filesystems
of blocksize 512 bytes
      gfs2_convert: gfs2_convert uses too much memory for jdata conversion
      gfs2_convert: Fix conversion of gfs1 CDPNs
      gfs2_convert: Doesn't convert indirectly-pointed extended
attributes correctly

Bob Peterson (3):
      gfs2: GFS2 utilities should make use of exported device topology
      cman: gfs_controld dm suspend hangs withdrawn GFS file system
      GFS2: fsck.gfs2 segfault - osi_tree "each_safe" patch

Christine Caulfield (3):
      cman: Add improved cluster_id hash function
      cman: move fnv hash function into its own file
      ccs: Remove non-existant commands from ccs_tool man page.

David Teigland (7):
      dlm_controld/libdlmcontrol/dlm_tool: separate plock debug buffer
      dlm_controld: add more fs_notified debugging
      dlm_controld/gfs_controld: avoid full plock unlock when no
resource exists
      dlm_controld: add plock checkpoint signatures
      dlm_controld: set last_plock_time for ownership operations
      dlm_controld: don't skip unlinking checkpoint
      gfs_controld: set last_plock_time for ownership operations

Fabio M. Di Nitto (1):
      dlm: bump libdlmcontrol sominor

Jan Friesse (1):
      fencing: SNMP fence agents don't fail

Lon Hohberger (4):
      config: Add hash_cluster_id to schema
      rgmanager: Fix 2+ simultaneous relocation crash
      rgmanager: Fix memory leaks during relocation
      rgmanager: Fix tiny memory leak during reconfig

Marek 'marx' Grac (1):
      fencing: Remove 'ipport' option from WTI fence agent

cman/daemon/Makefile                 |    3 +-
cman/daemon/cman-preconfig.c         |   32 +++-
cman/daemon/fnvhash.c                |   93 +++++++++
cman/daemon/fnvhash.h                |    1 +
config/plugins/ldap/99cluster.ldif   |   10 +-
config/plugins/ldap/ldap-base.csv    |    3 +-
config/tools/man/ccs_tool.8          |   15 +--
config/tools/xml/cluster.rng.in      |    3 +
dlm/libdlmcontrol/Makefile           |    2 +
dlm/libdlmcontrol/libdlmcontrol.h    |    1 +
dlm/libdlmcontrol/main.c             |    5 +
dlm/man/dlm_tool.8                   |    4 +
dlm/tool/main.c                      |   25 +++-
doc/COPYRIGHT                        |    6 +
fence/agents/lib/fencing_snmp.py.py  |   13 +-
fence/agents/wti/fence_wti.py        |    2 +-
gfs2/convert/gfs2_convert.c          |  377
++++++++++++++++++++++++++++------
gfs2/fsck/link.c                     |    2 -
gfs2/fsck/pass1b.c                   |   10 +-
gfs2/fsck/pass3.c                    |    5 +-
gfs2/fsck/pass4.c                    |    5 +-
gfs2/libgfs2/device_geometry.c       |   38 ++++-
gfs2/libgfs2/fs_ops.c                |    1 +
gfs2/libgfs2/libgfs2.h               |   10 +
gfs2/mkfs/main_mkfs.c                |   69 ++++++-
gfs2/quota/check.c                   |    3 +-
gfs2/quota/main.c                    |  104 +++-------
group/dlm_controld/cpg.c             |   62 ++++--
group/dlm_controld/dlm_controld.h    |    1 +
group/dlm_controld/dlm_daemon.h      |   44 +++-
group/dlm_controld/main.c            |   60 +++++-
group/dlm_controld/plock.c           |  241 ++++++++++++++--------
group/gfs_controld/plock.c           |   12 +-
group/gfs_controld/util.c            |    3 +-
rgmanager/src/daemons/event_config.c |    2 +
rgmanager/src/daemons/rg_state.c     |    2 +
rgmanager/src/daemons/rg_thread.c    |    4 +-
37 files changed, 967 insertions(+), 306 deletions(-)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQIcBAEBAgAGBQJLvxgDAAoJEFA6oBJjVJ+OOzUP/3Wl1ChlhmcjVvClhDyZhI4q
aPSTnChG1b40WB7sh7UQsVcD0mwAsPPsgDaZZUlybhWl2LylxZ5xEwu7VWoL8SwJ
8Q4aYT1Svp6jFfvqdmoRFmJfjp+vc3y7Gllx3NP6kLmf62TbTROgbc3X++72IFkf
14DPEonWao2FzKx7MaoZCSttc0djuILd+UNh7EEgqC2lyR2r3tatmCa1i/eT2Pfy
fwISqy4ioNie5i5SMO7fS9y4NCLnognMgeuH5iS5EJDUViougWyQSSorI8SQq36f
ZRyrrUwuUivT2ylXyz3TgfuojGpRuFy2AC1oBxRsiDOVyMrVFHX4NaP5E18J4qs1
0acYMULOpZYcwgKaLMy6haiYWwfvjFvI71zs4mKijmsWvuPbGTyVx7yxDJJco8SM
OQBF5holEHqOo4FVekFa6De0GUMjfgmpGhfPTtuw04/ww5pbNp84Y4TzEOsRA9dd
H6ak9yLwN4chjyDWRQxHsDnxCf67oqYDZJL5t1QlMauxruGYdXU3xIZRC9E4oYbW
+vu+DTbkMGg70xg2MbXH3E7EkGHeJ9EWgiuEh5l4pavrEo14rf80O0dtf+myn8t7
HosKmXjjdnjaVfYNimUH7/0mnISxX2YOO9uzBD6A/X9bqxrxC1Ky6TdI6tFN80dz
nH3IJrLomvkmnadhFRqg
=Z/pM
-----END PGP SIGNATURE-----



------------------------------

Message: 9
Date: Fri, 9 Apr 2010 10:00:51 -0300
From: Carlos Maiolino <cmaiolino at redhat.com>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] Clustering and Cluster-Storage channels
Message-ID: <20100409130051.GA31186 at andromeda.usersys.redhat.com>
Content-Type: text/plain; charset=iso-8859-1

On Fri, Apr 09, 2010 at 01:39:21PM +0200, frank wrote:
> Hi,
> we have several machines with RH 5.4 and we use Cluster and
> Cluster-Storage (because we use GFS).
> We also have trouble with updates because we don't know how to
> subscribe our machines to those channels. From RHN, in the "Software
> Channel Subscriptions" section, we see:
> 
> Release Channels for Red Hat Enterprise Linux 5 for x86_64
>     RHEL FasTrack (v. 5 for 64-bit x86_64) (Channel Details)
> Consumes a regular entitlement (13 available)
>     RHEL Optional Productivity Apps (v. 5 for 64-bit x86_64)
> (Channel Details)     Consumes a regular entitlement (13 available)
>     RHEL Supplementary (v. 5 for 64-bit x86_64) (Channel Details)
> Consumes a regular entitlement (13 available)
>     RHEL Virtualization (v. 5 for 64-bit x86_64) (Channel Details)
> Consumes a regular entitlement (9 available)
>     Red Hat Network Tools for RHEL Server (v.5 64-bit x86_64)
> (Channel Details)     Consumes a regular entitlement (10 available)
> BETA Channels for Red Hat Enterprise Linux 5 for x86_64
>     RHEL Optional Productivity Apps (v. 5 for 64-bit x86_64) Beta
> (Channel Details)     Consumes a regular entitlement (13 available)
>     RHEL Supplementary (v. 5 for 64-bit x86_64) Beta (Channel
> Details)     Consumes a regular entitlement (13 available)
>     RHEL Virtualization (v. 5 for 64-bit x86_64) Beta (Channel
> Details)     Consumes a regular entitlement (13 available)
>     Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) Beta (Channel
> Details)     Consumes a regular entitlement (13 available)
> Additional Services Channels for Red Hat Enterprise Linux 5 for x86_64
>     RHEL Hardware Certification (v. 5 for 64-bit x86_64) (Channel
> Details)     Consumes a regular entitlement (13 available)
> Additional Services BETA Channels for Red Hat Enterprise Linux 5 for x86_64
>     RHEL Cluster-Storage (v. 5 for 64-bit x86_64) Beta (Channel
> Details)     Consumes a regular entitlement (10 available)
>     RHEL Clustering (v. 5 for 64-bit x86_64) Beta (Channel Details)
> Consumes a regular entitlement (10 available)
>     RHEL Hardware Certification (v. 5 for 64-bit x86_64) Beta
> (Channel Details)     Consumes a regular entitlement (13 available)
> 
> There are cluster beta channels, but not the release ones. How can
> we subscribe systems to them?
> 
> Thanks and Regards.
> 
> Frank
> 
> -- 
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

Hello Frank.

I guess it is better for you to contact Red Hat support, since this looks more like a subscription problem than a technical problem.

see you ;)

-- 
---

Best Regards

Carlos Eduardo Maiolino



------------------------------

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 72, Issue 9
********************************************



      

