[Crash-utility] [PATCH 1/2] Fix cpu_slab freelist handling on SLUB

Dave Anderson anderson at redhat.com
Mon Apr 18 14:28:18 UTC 2016



----- Original Message -----
> Dave Anderson <anderson at redhat.com> writes:
> 
> > Can you show a before-and-after example of the "kmem -s" and "kmem -S"
> > output of a particular slab where your patch makes a difference?
> 
> The diff came to about 80k, so I attached a compressed one.
> 
> Thanks.
> --
> OGAWA Hirofumi <hirofumi at mail.parknet.co.jp>
> 
> 

I ran your patch against several CONFIG_SLUB kernels, and the major
difference is that your patch shows many caches with a much smaller ALLOCATED
count; quite often caches show an ALLOCATED count of 0, which I find hard
to believe is correct.
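
For reference, "kmem -s" arrives at the ALLOCATED column by subtracting
free object slots from the total object slots in a cache.  Here is a
minimal sketch of that accounting -- a simplified model, not crash's or
the kernel's actual data structures:

#include <stdio.h>

/*
 * Hypothetical model of per-cache accounting: a cache is a set of
 * slabs, each holding a fixed number of object slots, some of which
 * are free.  ALLOCATED = total slots - free slots.
 */
struct slab_model {
        unsigned int objects;   /* object slots in this slab */
        unsigned int free;      /* slots currently on a freelist */
};

static unsigned long allocated_count(const struct slab_model *s, int nr)
{
        unsigned long allocated = 0;
        int i;

        for (i = 0; i < nr; i++)
                allocated += s[i].objects - s[i].free;
        return allocated;
}

int main(void)
{
        /* Toy numbers only: 22 objects per slab, as with xfs_log_ticket. */
        struct slab_model slabs[] = {
                { 22, 0 },      /* full slab */
                { 22, 19 },     /* partial slab */
                { 22, 22 },     /* completely free slab */
        };

        printf("ALLOCATED = %lu\n",
               allocated_count(slabs, sizeof(slabs) / sizeof(slabs[0])));
        return 0;
}

The open question is which objects a given counter treats as "free" in
that subtraction.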

Here's an example.  I ran your patch on a live 3.10-based kernel, and 
see these counts on the xfs-based caches:

crash> kmem -s | grep -e OBJ -e xfs_
CACHE            NAME                 OBJSIZE  ALLOCATED     TOTAL  SLABS  SSIZE
ffff880035d54800 xfs_dqtrx                528          0         0      0    16k
ffff880035d54700 xfs_dquot                472          0         0      0    16k
ffff880035d54600 xfs_icr                  144          0         0      0     4k
ffff880035d54500 xfs_ili                  152     362473    364000  14000     4k
ffff880035d54400 xfs_inode               1024     478523    480510  16017    32k
ffff880035d54300 xfs_efd_item             400          0       300     15     8k
ffff880035d54200 xfs_da_state             480          0       272      8    16k
ffff880035d54100 xfs_btree_cur            208          0       312      8     8k
ffff880035d54000 xfs_log_ticket           184          3       682     31     4k
crash>

Note the last 4 caches above, which show ALLOCATED counts of 0, 0, 0 and 3.  
But then I look at /proc/slabinfo:

crash> !cat /proc/slabinfo | grep -e active -e xfs_
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
xfs_dqtrx              0      0    528   31    4 : tunables    0    0    0 : slabdata      0      0      0
xfs_dquot              0      0    472   34    4 : tunables    0    0    0 : slabdata      0      0      0
xfs_icr                0      0    144   28    1 : tunables    0    0    0 : slabdata      0      0      0
xfs_ili           362887 364000    152   26    1 : tunables    0    0    0 : slabdata  14000  14000      0
xfs_inode         478639 480510   1088   30    8 : tunables    0    0    0 : slabdata  16017  16017      0
xfs_efd_item         180    300    400   20    2 : tunables    0    0    0 : slabdata     15     15      0
xfs_da_state         272    272    480   34    4 : tunables    0    0    0 : slabdata      8      8      0
xfs_btree_cur        312    312    208   39    2 : tunables    0    0    0 : slabdata      8      8      0
xfs_log_ticket       682    682    184   22    1 : tunables    0    0    0 : slabdata     31     31      0
crash> 

which show 180, 272, 312 and 682 active counts.
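
One place where two such counters can legitimately diverge on SLUB is
the per-cpu slab: objects sitting on a cpu_slab freelist are free from
the allocator's point of view, but a counter that never walks the
per-cpu freelists reports them as allocated.  The sketch below contrasts
the two styles with hypothetical structures and toy numbers -- it is not
the kernel's get_slabinfo() path or crash's internals:

#include <stdio.h>

/*
 * Hypothetical model: the same cache counted two ways.  Style A does
 * not subtract free objects held on per-cpu freelists, so they count
 * as active; style B walks the per-cpu freelists and subtracts them.
 */
struct slab_model {
        unsigned int objects;        /* object slots in this slab */
        unsigned int partial_free;   /* free slots visible on the node's lists */
        unsigned int cpu_free;       /* free slots on a per-cpu freelist */
};

static unsigned long active_skip_cpu(const struct slab_model *s, int nr)
{
        unsigned long active = 0;
        int i;

        for (i = 0; i < nr; i++)
                active += s[i].objects - s[i].partial_free;
        return active;
}

static unsigned long active_walk_cpu(const struct slab_model *s, int nr)
{
        unsigned long active = 0;
        int i;

        for (i = 0; i < nr; i++)
                active += s[i].objects - s[i].partial_free - s[i].cpu_free;
        return active;
}

int main(void)
{
        struct slab_model slabs[] = {
                { 20, 0, 0 },    /* full slab */
                { 20, 12, 0 },   /* partial slab */
                { 20, 0, 20 },   /* per-cpu slab, all objects free */
        };
        int nr = sizeof(slabs) / sizeof(slabs[0]);

        printf("active, cpu freelists not walked: %lu\n", active_skip_cpu(slabs, nr));
        printf("active, cpu freelists walked:     %lu\n", active_walk_cpu(slabs, nr));
        return 0;
}

Even so, a per-cpu freelist holds at most a slab's worth of objects per
CPU, so whether that effect alone can drive counts like the ones above
all the way to zero is exactly the question.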

Can you explain the discrepancy?

Dave



