[rhn-users] Re: rhn-users Digest, Vol 15, Issue 36
jimalif at alifantis.com
Tue May 24 17:32:04 UTC 2005
> Send rhn-users mailing list submissions to
> rhn-users at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://www.redhat.com/mailman/listinfo/rhn-users
> or, via email, send a message with subject or body 'help' to
> rhn-users-request at redhat.com
>
> You can reach the person managing the list at
> rhn-users-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of rhn-users digest..."
>
>
> Today's Topics:
>
> 1. RE: Ulimit and cron scheduler - too many open files error
> (Serge Bianda)
> 2. Re: fatal unpack error during up2date -u (Jim Pelton)
> 3. login hanging (Clinton Fernandes)
> 4. Re: login hanging (Pete Masse)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 24 May 2005 12:17:11 -0400
> From: "Serge Bianda" <serge.bianda at appiancorp.com>
> Subject: RE: [rhn-users] Ulimit and cron scheduler - too many open
> files error
> To: "Red Hat Network Users List" <rhn-users at redhat.com>
> Cc: Christoph.Doerbeck at FMR.COM
> Message-ID:
> <E3911EDB54DAA743A952BD04C1466E208DD84C at EXCHANGE2.appiancorp.com>
> Content-Type: text/plain; charset="us-ascii"
>
> # /etc/security/limits.conf
> #
> #Each line describes a limit for a user in the form:
> #
> #<domain> <type> <item> <value>
> #
> #Where:
> #<domain> can be:
> # - a user name
> # - a group name, with @group syntax
> # - the wildcard *, for default entry
> #
> #<type> can have the two values:
> # - "soft" for enforcing the soft limits
> # - "hard" for enforcing hard limits
> #
> #<item> can be one of the following:
> # - core - limits the core file size (KB)
> # - data - max data size (KB)
> # - fsize - maximum filesize (KB)
> # - memlock - max locked-in-memory address space (KB)
> # - nofile - max number of open files
> # - rss - max resident set size (KB)
> # - stack - max stack size (KB)
> # - cpu - max CPU time (MIN)
> # - nproc - max number of processes
> # - as - address space limit
> # - maxlogins - max number of logins for this user
> # - priority - the priority to run user process with
> # - locks - max number of file locks the user can hold
> #
> #<domain> <type> <item> <value>
> #
>
> * soft nofile 63536
> * hard nofile 63536
> testuser1 hard nofile 63536
> testuser1 soft nofile 63536
> # End of file
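A quick way to check whether a fresh login session actually picks up these values (assuming pam_limits is enabled for the login path in /etc/pam.d, which is the usual setup that makes limits.conf take effect) is to print the session's own limits:

```shell
# Print the hard and soft open-file limits for the current session.
# In a fresh login for testuser1, both should read 63536 if
# pam_limits processed the limits.conf entries above.
echo "hard nofile: $(ulimit -H -n)"
echo "soft nofile: $(ulimit -S -n)"
```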
>
>
>
> ________________________________
>
> From: rhn-users-bounces at redhat.com [mailto:rhn-users-bounces at redhat.com]
> On Behalf Of Doerbeck, Christoph
> Sent: Tuesday, May 24, 2005 10:38 AM
> To: Red Hat Network Users List
> Subject: RE: [rhn-users] Ulimit and cron scheduler - too many open files
> error
>
>
>
> please post your limits.conf
>
>
>
>
>
>
>
> Christoph
>
>
>
> -----Original Message-----
> From: Serge Bianda [mailto:serge.bianda at appiancorp.com]
> Sent: Tuesday, May 24, 2005 10:28 AM
> To: rhn-users at redhat.com
> Subject: [rhn-users] Ulimit and cron scheduler - too many open files
> error
>
>
>
> We are trying to run a script from the cron scheduler. For the
> script to run, we had to increase the number of open file descriptors
> from 1024 to 63000 using "ulimit -n", and we raised the corresponding
> value in /etc/security/limits.conf. But cron does not pick up the
> increased file-descriptor limit, and the job always fails with a
> "too many open files" error. We also tried raising the value inside
> the script that cron runs, but cron does not seem to honor it even
> when it is set in the script. Are you aware of any global setting for
> the cron scheduler that will give it more open file descriptors? Note
> that when the script is run from the command line, outside the cron
> scheduler, it works fine and does not produce "too many open files"
> errors.
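One workaround often suggested for this symptom, sketched below rather than confirmed for this release: limits.conf is applied by pam_limits, and cron sessions may not pass through that PAM path, so have crontab invoke a small wrapper that raises the limit itself before exec'ing the real script. The job path in the sketch is hypothetical; on some releases adding `session required pam_limits.so` to /etc/pam.d/crond is the cleaner fix.

```shell
#!/bin/sh
# Hypothetical cron wrapper: raise the open-file limit for this shell
# and every child it execs. 63000 is the value from the post; raising
# the limit only succeeds if the hard limit already allows it.
ulimit -n 63000 2>/dev/null || echo "could not raise nofile to 63000" >&2
echo "effective nofile limit: $(ulimit -n)"
# exec /usr/local/bin/real-job.sh   # hypothetical path to the real script
```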
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
> https://www.redhat.com/archives/rhn-users/attachments/20050524/90c78887/attachment.htm
>
> ------------------------------
>
> Message: 2
> Date: Tue, 24 May 2005 10:28:31 -0600
> From: "Jim Pelton" <Jim.Pelton at noaa.gov>
> Subject: Re: [rhn-users] fatal unpack error during up2date -u
> To: Red Hat Network Users List <rhn-users at redhat.com>
> Message-ID: <4293562F.9000004 at noaa.gov>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Christoph,
>
> Thanks for the reply; however, the problem is not that the rpm
> database and the package profile on the satellite are out of sync. We
> did run up2date -p and up2date -l, but with no resolution. The problem
> is that the packages that were not installed, but only downloaded,
> disappeared completely from the rpm database on the client system! The
> database does not seem corrupted, just incomplete -- it is missing
> about 30 entries. Again, it is synced with the package profile on the
> satellite system. So I guess my question is: does up2date remove
> packages from the client rpm database as soon as they are downloaded
> to the client system, then re-add them after/during rpm installation?
>
> Thanks again!
>
> --Jim
>
> Doerbeck, Christoph wrote:
>
>>You might try "up2date -p", which (I think) should refresh the system
>>profile on the RHN server. Follow up with "up2date --list" to see
>>if the missing packages are now flagged as requiring updates.
>>
>>What version of RHN proxy are you using?
>>
>>Christoph
>>
>>-----Original Message-----
>>From: Jim Pelton [mailto:Jim.Pelton at noaa.gov]
>>Sent: Monday, May 23, 2005 6:33 PM
>>To: rhn-users at redhat.com
>>Subject: [rhn-users] fatal unpack error during up2date -u
>>
>>
>>Hello List!
>>
>>Here's a tricky one for ya! We've got an RHN proxy server, through
>>which we communicate with an RHN satellite server. Now the proxy is
>>working great. It seems to be communicating correctly with the
>>satellite server and downloading rpm patches to its database. I have
>>a test machine running which applies updates downloaded from the
>>proxy via up2date.
>>
>>The issue is this: we ran up2date -u (from the command line) on a
>>client machine, and during the install phase of the update the process
>>died completely. This error was issued: "There was a rpm unpack error
>>installing the package: gnome-applets-2.2.2-2.1E." Normally one would
>>restart the process, assuming that it would pick up where it left off;
>>RPM is supposed to be quite robust, correct? Well, up2date -u (as well
>>as up2date -l) claims that the system is up to date, but something like
>>30 packages were not installed -- they were downloaded, however, as
>>they were confirmed in /var/spool/up2date. If we log onto the satellite
>>server, the system is claimed to be up to date there as well. The
>>output of "rpm -qa | nl" shows 507 packages, the same as the satellite
>>server. The interesting thing is that if we "grep" the output of "rpm
>>-qa" for a package name that was not installed before the fatal rpm
>>unpack error, not even the old package is shown as being installed!
>>
>>We wiped out /var/spool/up2date in an effort to "trick" up2date into
>>downloading and installing the packages again but we now know that
>>up2date does not even look there for confirmation of what it has
>>downloaded.
>>So what could the problem be?
>>
>>Thanks in advance. I understand this is probably more of an RPM issue,
>>but since it does involve a lot of RHN stuff I thought I'd post here
>>anyway.
>>
>>--Jim
>>
>>_______________________________________________
>>rhn-users mailing list
>>rhn-users at redhat.com
>>https://www.redhat.com/mailman/listinfo/rhn-users
>>
>>
>
>
>
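For anyone hitting the same symptom, a rough triage sketch with standard rpm tools (the rpmdb_verify path below is the usual RHEL location and may differ on other releases) to distinguish a corrupt client database from one that is merely missing entries:

```shell
# Rough check of whether the client rpm database is corrupt rather
# than merely missing entries (run as root; paths are RHEL defaults).
rpm -qa | wc -l                                  # count installed-package entries
/usr/lib/rpm/rpmdb_verify /var/lib/rpm/Packages  # low-level Berkeley DB check
rpm --rebuilddb                                  # rebuild indexes if verify complains
rpm -q gnome-applets                             # re-query the package from the failed unpack
```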
> ------------------------------
>
> Message: 3
> Date: Tue, 24 May 2005 09:37:03 -0700
> From: Clinton Fernandes <cfernand at utm.utoronto.ca>
> Subject: [rhn-users] login hanging
> To: rhn-users at redhat.com
> Message-ID: <1116952623.4293582f41ed5 at webmail.utm.utoronto.ca>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi everyone,
>
> I'm having a pretty strange problem that will likely turn into a
> general panic if I can't resolve it soon.
>
> Over the weekend one of my servers became impossible to log into.
> When I enter my password the system just hangs and doesn't respond.
>
> I have tried this over SSH as well as at the console, with 3
> different logins.
>
> This particular server is an NIS, Samba and NFS server, and all three
> services are working properly. That is, I can log into my NIS clients
> just fine, and access the NFS shares.
>
> I have run out of ideas before I even had any on how to fix this.
>
> Any suggestions?
>
> Thanks a lot,
> Clint.
>
> --
> my tail is dun.
>
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 24 May 2005 10:52:46 -0600
> From: "Pete Masse" <pete at chemistry.montana.edu>
> Subject: Re: [rhn-users] login hanging
> To: "Red Hat Network Users List" <rhn-users at redhat.com>
> Message-ID: <02ec01c56081$02d3f4a0$ebf25a99 at msu.montana.edu>
> Content-Type: text/plain; format=flowed; charset="iso-8859-1";
> reply-type=original
>
> I've had this problem before. Portmap had crashed. If I could get into
> it,
> restarting portmap solved the problem, otherwise a reboot solved it.
>
> Pete
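The checks behind that suggestion can be sketched as follows (standard RHEL 3/4 service and command names; adjust for your release):

```shell
# Is the RPC portmapper answering? A hang at password entry matches
# the symptom of a crashed portmap on an NIS/NFS server.
rpcinfo -p localhost || echo "portmap not answering"
service portmap status      # init-script status on RHEL-style systems
ypwhich                     # does NIS lookup still respond from this host?
service portmap restart     # Pete's fix: restart portmap (or reboot)
```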
>
>
>
>
>
>
>
>
> ------------------------------
>
> _______________________________________________
> rhn-users mailing list
> rhn-users at redhat.com
> https://www.redhat.com/mailman/listinfo/rhn-users
>
> End of rhn-users Digest, Vol 15, Issue 36
> *****************************************
>