On Wed, Dec 31, 2008 at 17:35, Mike McGrath <mmcgrath@redhat.com> wrote:
<div class="Ih2E3d">On Wed, 31 Dec 2008, Corey Chandler wrote:<br>
<br>
> > Mike McGrath wrote:
> > > Let's pool some knowledge together, because at this point I'm
> > > missing something.
> > >
> > > I've been doing all measurements with sar, since bonnie and the
> > > like cause builds to time out.
> > >
> > > Problem: We're seeing slower-than-normal disk IO. At least I think
> > > we are. This is a PERC5/E and MD1000 array.
> > >
> >
> > 1. Are we sure the array hasn't lost a drive?
>
> I can't physically look at the drives (they're a couple hundred miles
> away), but we've seen no reports of a failure (via the DRAC, anyway).
> I'll have to get the RAID software on there to be sure. I'd expect a
> degraded RAID array to affect both direct block access and file-level
> access.
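(If Dell OpenManage or LSI's MegaCli happens to be on that box, the
array state can be checked remotely without eyes on the hardware; the
controller number below is a guess:)

    # Dell OpenManage, if installed
    omreport storage vdisk controller=0    # virtual disk state (Ready/Degraded)
    omreport storage pdisk controller=0    # per-drive status

    # LSI's MegaCli also talks to the PERC5/E
    MegaCli -LDInfo -Lall -aALL            # logical drive state
    MegaCli -PDList -aALL | grep -i state  # physical drive states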
<div class="Ih2E3d"><br>
> 2. What's your scheduler set to? CFQ tends to not work in many applications<br>
> where the deadline scheduler works better...<br>
><br>
<br>
> I'd tried other schedulers earlier, but they didn't seem to make much
> of a difference. Even still, I'll get deadline set up and take a look.
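(For reference, the scheduler can be checked and switched at runtime;
/dev/sdb below is just a placeholder for whatever device backs the
array:)

    # The active scheduler is the one shown in brackets
    cat /sys/block/sdb/queue/scheduler

    # Switch to deadline without a reboot
    echo deadline > /sys/block/sdb/queue/scheduler

    # Or make it the boot-time default for all disks by adding
    # elevator=deadline to the kernel line in grub.conf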
>
> At least we've got the dd and cat problem figured out. Now to figure
> out why there's such a discrepancy between file-level reads and
> block-level reads. Anyone else have an array of this type and size to
> run those tests on? I'd be curious to see what others are getting.
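I don't have an array of that type to test on, but for anyone who does,
something like this keeps the block-level and file-level tests honest by
dropping the page cache in between (device and path are placeholders):

    # Block-level sequential read straight off the device (~4 GB)
    dd if=/dev/sdb of=/dev/null bs=1M count=4096

    # Drop caches so the file-level test also has to hit the disk
    sync; echo 3 > /proc/sys/vm/drop_caches

    # File-level read of a comparable amount of data through the filesystem
    time tar cf /dev/null /srv/array/some-large-tree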
<font color="#888888"></font></blockquote><div><br>we are working on a rhel3 to 5 migration at my job. We have 2 primary filesystems. one is large database files and the other is lots of small documents. As we were testing backup software for rhel5 we noticed a 60% decrease in
speed moving from rhel3 to rhel5 with the same file system, but only on the document filesystem, the db file system was perfectly snappy.<br><br>After a lot of troubleshooting it was deemed to be related to the dir_index btree hash. The path was to long before there was a difference in the names of the files, making the index incredibly slow. Removing dir_index recovered a bit of the difference, but didn't resolve the issue. A quick rename of one of
the base directories recovered almost the entire 60%.<br>
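For anyone who wants to check whether dir_index is in play on their own
ext3 filesystem, roughly (device name is a placeholder, and e2fsck wants
the filesystem unmounted):

    # See whether dir_index is among the enabled features
    tune2fs -l /dev/sdb1 | grep -i features

    # Turn it off, then rebuild/optimize the directories
    tune2fs -O ^dir_index /dev/sdb1
    e2fsck -fD /dev/sdb1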
Thought I'd at least throw it out there. Although I'm not sure it's the
exact issue, it doesn't hurt to have it floating in the background.

-greg/xaeth