Why is my load ave so high now?

Rick Stevens ricks at nerd.com
Mon Jul 27 18:26:03 UTC 2009


Kevin J. Cummings wrote:
> On 07/27/2009 12:04 PM, Bill Davidsen wrote:
>> Kevin J. Cummings wrote:
>>> Could it be my ivtv0 (PVR-350) board?  It's not supposed to be doing
>>> anything at the moment!  There's nothing plugged into it, and it's not
>>> configured under MythTV right now (cable went all digital)....
>>>
>>> I'll try removing the driver module and see if that helps.  At worst,
>>> I'll remove the board entirely.
> 
> I ended up rmmod'ing ivtv and ivtvfb, and it didn't help.  Yes, the number
> of interrupts dropped noticeably, but the load average remains 10+....
> 
>> Looking at the original 'top' output, all the CPU time was going to nice
>> processing, presumably SETI. Since you note the load average is still
>> high after killing that, could we see the top few lines again to see the
>> distribution? I note that hi/si are low, and a high load average points
>> to runnable processes (my first guess was that SETI went threaded). So
>> 'top' with the 'i' toggle (show only active, non-idle tasks) should show
>> what's running.
> 
> (I learn something new every day!)
> 
> Sure, here it is for "top -i":
> 
>> top - 12:58:12 up 6 days,  9:02,  4 users,  load average: 11.30, 11.15, 11.10
>> Cpu(s):  0.7%us,  0.7%sy,  0.0%ni, 98.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> Mem:   2074172k total,  1961024k used,   113148k free,   199680k buffers
>> Swap:  3911816k total,      412k used,  3911404k free,   935716k cached
>>
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                          
>> 14743 root      20   0  2560 1152  836 R  0.7  0.1   0:00.08 top                                                                                              
>>  2506 root      20   0 15068  860  592 R  0.0  0.0   0:32.72 apcupsd                                                                                          
>>  2547 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2548 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2549 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2550 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2551 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2552 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2553 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>>  2554 root      15  -5     0    0    0 D  0.0  0.0   0:00.00 nfsd                                                                                             
>> 17427 root      20   0 77040  72m  752 D  0.0  3.6   0:02.07 clamscan                                                                                         
>> 24904 root      20   0  2492  964  704 D  0.0  0.0   0:01.87 find                                                                                             
>> 28703 root      39  19  1900  652  540 D  0.0  0.0   0:00.00 updatedb                                                                                         
> 
> That's the entire top output....
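
For what it's worth, here's a rough non-interactive equivalent of the 'i'
toggle Bill mentions above: list just the tasks in the states that feed
the load average (runnable "R" and uninterruptible "D").  This is only a
sketch assuming a stock Linux procps ps; exact state letters and column
order are up to you:

  # Snapshot of runnable (R) and uninterruptible-sleep (D) tasks --
  # these are the states the kernel counts in the load average:
  ps -eo pid,stat,pcpu,comm | awk 'NR==1 || $2 ~ /^[RD]/'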

You see a bunch of NFS-related things in a "D" state and you wonder why
the load average is so high?

On Linux the load average counts tasks in uninterruptible sleep (the "D"
state) as well as runnable ones, so a pile of processes stuck waiting on
I/O will hold the load average above 10 even while the CPU is almost
entirely idle.  Processes stuck in "D" state will bog things down
badly...especially if the NFS mounts are mounted "hard", since a hard
mount retries an unresponsive server forever and the waiting processes
stay uninterruptible until it answers.
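
A quick way to see what those "D" state tasks are actually blocked on, and
how the NFS filesystems are mounted, is something like the following.
This is just a sketch (nfsstat comes from nfs-utils and may not be
installed; the wchan column width is arbitrary):

  # Show what each uninterruptible task is sleeping in -- the wchan
  # column usually points straight at the stuck filesystem or driver:
  ps axo pid,stat,wchan:25,comm | awk 'NR==1 || $2 ~ /^D/'

  # Mount options actually in effect for the NFS mounts -- look for
  # "hard" vs. "soft" (and the timeo/retrans values):
  nfsstat -m
  grep ' nfs' /proc/mounts

If the wchan column shows those tasks sleeping inside NFS code, the next
thing to check is whether the server those mounts point at is still
answering.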
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer                      ricks at nerd.com -
- AIM/Skype: therps2        ICQ: 22643734            Yahoo: origrps2 -
-                                                                    -
-     Squawk!  Pieces of Seven!  Pieces of Seven!  Parity Error!     -
----------------------------------------------------------------------