I/O Scheduling results in poor responsiveness

Pasi Kärkkäinen pasik at iki.fi
Sun Mar 9 13:03:00 UTC 2008


On Thu, Mar 06, 2008 at 11:36:49AM -0500, Bill Davidsen wrote:
> Nathan Grennan wrote:
> >    Why is the command below all that is needed to bring the system to 
> >its knees? Why does the io scheduler, CFQ, which is supposed to be all 
> >about fairness, starve other processes? For example, if I open a new 
> >file in vim and hold down "i" while this is running, the display of new 
> >"i"s pauses for seconds, sometimes until the dd write is completely 
> >finished. Another example: applications like firefox, thunderbird, 
> >xchat, and pidgin stop refreshing for 10+ seconds.
> >
> > dd if=/dev/zero of=test-file bs=2M count=2048
> >
> > I understand the main difference between using oflag=direct or not 
> >relates to whether the io scheduler is used and whether the file is 
> >cached. I can see this clearly by watching the "cached" figure rise 
> >without oflag=direct, stay the same with it, and drop way down when I 
> >delete the file after running dd without oflag=direct.
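The cache behaviour described above is easy to reproduce; a minimal sketch (file name and sizes are illustrative, not from the original post):

```shell
# Watch "Cached" in /proc/meminfo around buffered vs. O_DIRECT writes.
grep '^Cached' /proc/meminfo                               # baseline
dd if=/dev/zero of=test-file bs=2M count=256               # buffered: Cached grows
grep '^Cached' /proc/meminfo
dd if=/dev/zero of=test-file bs=2M count=256 oflag=direct  # O_DIRECT: Cached stays flat
grep '^Cached' /proc/meminfo
rm test-file                                               # cached pages for the file are freed
```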
> >
> > The system in question is running Fedora 8. It is an E6600 with 4 GB 
> >of memory and two 300 GB Seagate SATA drives. The drives are set up 
> >with md RAID 1, and the filesystem is ext3. But I also see this on 
> >plenty of other systems with more CPU, less CPU, less memory, RAID, and 
> >no RAID.
> >
> > I have tried various tweaks to the sysctl vm settings and tried 
> >changing the scheduler to as or deadline. Nothing seems to get it to 
> >behave, other than oflag=direct.
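For reference, the scheduler can be switched per device at runtime through sysfs; a sketch, assuming the disk is sda (the device name and sysctl values are illustrative):

```shell
# List the available schedulers; the active one is shown in brackets.
cat /sys/block/sda/queue/scheduler
# Switch to deadline (needs root); takes effect immediately.
echo deadline > /sys/block/sda/queue/scheduler
# Typical vm knobs people tune for this (values illustrative, not a recommendation):
sysctl -w vm.dirty_ratio=10
sysctl -w vm.dirty_background_ratio=5
```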
> >
> Known problem with the io schedulers, discussed from time to time on 
> the RAID list. The current io schedulers don't split drive access fairly 
> between reads and writes, so when a huge batch of writes gets queued, 
> reads suffer. In your case, the vi problem may be an issue of doing a 
> write to the file and that write sitting at the end of the io queue.
> 
> Note: the optimization is for throughput, not responsiveness; you may 
> see more pleasing results with the deadline scheduler. You may also want 
> to look at using NCQ and setting the queue_depth in /sys. I can't 
> explain it without looking up the details, so there's something for you 
> to check.
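The NCQ queue depth mentioned above lives under the device node in sysfs; a sketch, assuming the disk is sda:

```shell
# Read the current NCQ queue depth (commonly 31 when NCQ is enabled).
cat /sys/block/sda/device/queue_depth
# Lower it, or set it to 1 to effectively disable NCQ (needs root).
echo 1 > /sys/block/sda/device/queue_depth
```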
> 

Hi!

Do you happen to know if it's possible to check the current queue depth "in
use" -- meaning how many commands are currently queued?

-- Pasi

More information about the fedora-list mailing list