linux/mm
Christoph Lameter f1b2633936 SLUB: faster more efficient slab determination for __kmalloc
kmalloc_index() is a long series of comparisons.  An earlier attempt to replace
kmalloc_index() with something more efficient, such as ilog2(), failed due to
compiler issues with constant folding on gcc 3.3 / powerpc.

kmalloc_index()'s long list of comparisons works fine for constant folding,
since all of the comparisons are optimized away when the size is known at
compile time.  However, SLUB also uses kmalloc_index() to determine the slab
to use for the __kmalloc_xxx functions, where the size is a runtime value.
This leads to a large series of comparisons in get_slab().

This patch gets rid of that list of comparisons in get_slab():

1. If the requested size is larger than 192 then we can simply use
   fls to determine the slab index, since all larger slabs are
   power-of-two sized.

2. If the requested size is 192 or smaller then we cannot use fls,
   since there are non-power-of-two caches to be considered.  However,
   the sizes are in a manageable range, so we divide the size by 8.
   That leaves only 24 possibilities, and we simply look up the
   kmalloc index in a table.

Code size of slub.o decreases by more than 200 bytes through this patch.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-17 10:23:01 -07:00
allocpercpu.c
backing-dev.c remove mm/backing-dev.c:congestion_wait_interruptible() 2007-07-16 09:05:52 -07:00
bootmem.c
bounce.c
fadvise.c
filemap_xip.c
filemap.c Fix read/truncate race 2007-07-17 10:22:59 -07:00
filemap.h
fremap.c
highmem.c Create the ZONE_MOVABLE zone 2007-07-17 10:22:59 -07:00
hugetlb.c Allow huge page allocations to use GFP_HIGH_MOVABLE 2007-07-17 10:22:59 -07:00
internal.h
Kconfig Merge master.kernel.org:/pub/scm/linux/kernel/git/lethal/sh-2.6 2007-07-16 10:32:02 -07:00
madvise.c speed up madvise_need_mmap_write() usage 2007-07-16 09:05:36 -07:00
Makefile
memory_hotplug.c
memory.c Add __GFP_MOVABLE for callers to flag allocations from high memory that may be migrated 2007-07-17 10:22:59 -07:00
mempolicy.c Allow huge page allocations to use GFP_HIGH_MOVABLE 2007-07-17 10:22:59 -07:00
mempool.c permit mempool_free(NULL) 2007-07-16 09:05:52 -07:00
migrate.c Add __GFP_MOVABLE for callers to flag allocations from high memory that may be migrated 2007-07-17 10:22:59 -07:00
mincore.c
mlock.c do not limit locked memory when RLIMIT_MEMLOCK is RLIM_INFINITY 2007-07-16 09:05:37 -07:00
mmap.c split mmap 2007-07-16 09:05:37 -07:00
mmzone.c
mprotect.c
mremap.c
msync.c
nommu.c nommu: stub expand_stack() for nommu case 2007-07-16 09:05:37 -07:00
oom_kill.c
page_alloc.c Lumpy Reclaim V4 2007-07-17 10:22:59 -07:00
page_io.c
page-writeback.c dirty_writeback_centisecs_handler() cleanup 2007-07-16 09:05:47 -07:00
pdflush.c
prio_tree.c
quicklist.c
readahead.c
rmap.c
shmem_acl.c
shmem.c Add __GFP_MOVABLE for callers to flag allocations from high memory that may be migrated 2007-07-17 10:22:59 -07:00
slab.c Slab allocators: support __GFP_ZERO in all allocators 2007-07-17 10:23:01 -07:00
slob.c Slab allocators: support __GFP_ZERO in all allocators 2007-07-17 10:23:01 -07:00
slub.c SLUB: faster more efficient slab determination for __kmalloc 2007-07-17 10:23:01 -07:00
sparse.c
swap_state.c Add __GFP_MOVABLE for callers to flag allocations from high memory that may be migrated 2007-07-17 10:22:59 -07:00
swap.c
swapfile.c vmscan: fix comments related to shrink_list() 2007-07-16 09:05:35 -07:00
thrash.c
tiny-shmem.c
truncate.c invalidate_mapping_pages(): add cond_resched 2007-07-16 09:05:36 -07:00
util.c Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics 2007-07-17 10:23:01 -07:00
vmalloc.c
vmscan.c mm: clean up and kernelify shrinker registration 2007-07-17 10:23:00 -07:00
vmstat.c Create the ZONE_MOVABLE zone 2007-07-17 10:22:59 -07:00