Commit log for mm/slub.c
Age | Commit message | Author | Files | Lines
2007-11-12 | SLUB: killed the unused "end" variable | Denis Cheng | 1 | -2/+0
2007-11-05 | SLUB: Fix memory leak by not reusing cpu_slab | Christoph Lameter | 1 | -19/+1
2007-10-29 | missing atomic_read_long() in slub.c | Al Viro | 1 | -1/+1
2007-10-22 | memory hotplug: make kmem_cache_node for SLUB on memory online avoid panic | Yasunori Goto | 1 | -0/+118
2007-10-17 | Slab API: remove useless ctor parameter and reorder parameters | Christoph Lameter | 1 | -6/+6
2007-10-17 | SLUB: simplify IRQ off handling | Christoph Lameter | 1 | -11/+7
2007-10-16 | slub: list_locations() can use GFP_TEMPORARY | Andrew Morton | 1 | -1/+1
2007-10-16 | SLUB: Optimize cacheline use for zeroing | Christoph Lameter | 1 | -2/+12
2007-10-16 | SLUB: Place kmem_cache_cpu structures in a NUMA aware way | Christoph Lameter | 1 | -14/+154
2007-10-16 | SLUB: Avoid touching page struct when freeing to per cpu slab | Christoph Lameter | 1 | -5/+9
2007-10-16 | SLUB: Move page->offset to kmem_cache_cpu->offset | Christoph Lameter | 1 | -41/+11
2007-10-16 | SLUB: Do not use page->mapping | Christoph Lameter | 1 | -2/+0
2007-10-16 | SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 1 | -74/+116
2007-10-16 | Group short-lived and reclaimable kernel allocations | Mel Gorman | 1 | -0/+3
2007-10-16 | Categorize GFP flags | Christoph Lameter | 1 | -2/+3
2007-10-16 | Memoryless nodes: SLUB support | Christoph Lameter | 1 | -8/+8
2007-10-16 | Slab allocators: fail if ksize is called with a NULL parameter | Christoph Lameter | 1 | -1/+2
2007-10-16 | {slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check | Satyam Sharma | 1 | -4/+4
2007-10-16 | SLUB: direct pass through of page size or higher kmalloc requests | Christoph Lameter | 1 | -25/+38
2007-10-16 | slub.c:early_kmem_cache_node_alloc() shouldn't be __init | Adrian Bunk | 1 | -2/+2
2007-09-11 | SLUB: accurately compare debug flags during slab cache merge | Christoph Lameter | 1 | -15/+23
2007-08-31 | slub: do not fail if we cannot register a slab with sysfs | Christoph Lameter | 1 | -2/+6
2007-08-22 | SLUB: do not fail on broken memory configurations | Christoph Lameter | 1 | -1/+8
2007-08-22 | SLUB: use atomic_long_read for atomic_long variables | Christoph Lameter | 1 | -3/+3
2007-08-09 | SLUB: Fix dynamic dma kmalloc cache creation | Christoph Lameter | 1 | -14/+45
2007-08-09 | SLUB: Remove checks for MAX_PARTIAL from kmem_cache_shrink | Christoph Lameter | 1 | -7/+2
2007-07-30 | slub: fix bug in slub debug support | Peter Zijlstra | 1 | -1/+1
2007-07-30 | slub: add lock debugging check | Peter Zijlstra | 1 | -0/+1
2007-07-20 | mm: Remove slab destructors from kmem_cache_create(). | Paul Mundt | 1 | -3/+1
2007-07-19 | slub: fix ksize() for zero-sized pointers | Linus Torvalds | 1 | -1/+1
2007-07-17 | SLUB: Fix CONFIG_SLUB_DEBUG use for CONFIG_NUMA | Christoph Lameter | 1 | -0/+4
2007-07-17 | SLUB: Move sysfs operations outside of slub_lock | Christoph Lameter | 1 | -13/+15
2007-07-17 | SLUB: Do not allocate object bit array on stack | Christoph Lameter | 1 | -14/+25
2007-07-17 | Slab allocators: Cleanup zeroing allocations | Christoph Lameter | 1 | -11/+0
2007-07-17 | SLUB: Do not use length parameter in slab_alloc() | Christoph Lameter | 1 | -11/+9
2007-07-17 | SLUB: Style fix up the loop to disable small slabs | Christoph Lameter | 1 | -1/+1
2007-07-17 | mm/slub.c: make code static | Adrian Bunk | 1 | -3/+3
2007-07-17 | SLUB: Simplify dma index -> size calculation | Christoph Lameter | 1 | -9/+1
2007-07-17 | SLUB: faster more efficient slab determination for __kmalloc | Christoph Lameter | 1 | -7/+64
2007-07-17 | SLUB: do proper locking during dma slab creation | Christoph Lameter | 1 | -2/+9
2007-07-17 | SLUB: extract dma_kmalloc_cache from get_cache. | Christoph Lameter | 1 | -30/+36
2007-07-17 | SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | Christoph Lameter | 1 | -6/+7
2007-07-17 | Slab allocators: support __GFP_ZERO in all allocators | Christoph Lameter | 1 | -9/+15
2007-07-17 | Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | Christoph Lameter | 1 | -13/+16
2007-07-17 | Slab allocators: consolidate code for krealloc in mm/util.c | Christoph Lameter | 1 | -37/+0
2007-07-17 | SLUB Debug: fix initial object debug state of NUMA bootstrap objects | Christoph Lameter | 1 | -1/+2
2007-07-17 | SLUB: ensure that the number of objects per slab stays low for high orders | Christoph Lameter | 1 | -2/+19
2007-07-17 | SLUB slab validation: Move tracking information alloc outside of lock | Christoph Lameter | 1 | -10/+7
2007-07-17 | SLUB: use list_for_each_entry for loops over all slabs | Christoph Lameter | 1 | -38/+13
2007-07-17 | SLUB: change error reporting format to follow lockdep loosely | Christoph Lameter | 1 | -123/+154
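Several of the entries above turn on the same zero-size-pointer convention ("Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics", "{slub, slob}: use unlikely() for kfree(ZERO_OR_NULL_PTR) check", "slub: fix ksize() for zero-sized pointers"). The sketch below illustrates that convention in userspace C: the ZERO_SIZE_PTR and ZERO_OR_NULL_PTR macros match the <linux/slab.h> definitions of that era, while kfree_sketch() and main() are hypothetical scaffolding for illustration, not actual mm/slub.c code.

    #include <stdio.h>

    /* Stand-in for the kernel's unlikely() from <linux/compiler.h>,
     * so this sketch builds in userspace. */
    #define unlikely(x) __builtin_expect(!!(x), 0)

    /* These two definitions match <linux/slab.h> of the 2.6.23 era:
     * kmalloc(0) returns the distinguished non-NULL ZERO_SIZE_PTR, so
     * a zero-size "success" stays distinguishable from allocation
     * failure, and ZERO_OR_NULL_PTR() catches both it and NULL in a
     * single comparison. */
    #define ZERO_SIZE_PTR ((void *)16)
    #define ZERO_OR_NULL_PTR(x) ((unsigned long)(x) <= \
                                 (unsigned long)ZERO_SIZE_PTR)

    /* kfree_sketch() is a hypothetical stand-in for a free path, not
     * the real mm/slub.c code: it bails out early for both NULL and
     * ZERO_SIZE_PTR, with the test marked unlikely() since genuine
     * frees dominate in practice. */
    static void kfree_sketch(const void *object)
    {
            if (unlikely(ZERO_OR_NULL_PTR(object)))
                    return;
            printf("would free object at %p\n",
                   (void *)(unsigned long)object);
    }

    int main(void)
    {
            int x = 42;

            kfree_sketch(NULL);          /* silently ignored */
            kfree_sketch(ZERO_SIZE_PTR); /* silently ignored */
            kfree_sketch(&x);            /* reaches the free path */
            return 0;
    }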