path: root/lib/find_next_bit.c
Age | Commit message | Author | Files | Lines
2011-05-26 | arch: remove CONFIG_GENERIC_FIND_{NEXT_BIT,BIT_LE,LAST_BIT} | Akinobu Mita | 1 | -6/+0
After the previous style change, CONFIG_GENERIC_FIND_NEXT_BIT, CONFIG_GENERIC_FIND_BIT_LE, and CONFIG_GENERIC_FIND_LAST_BIT are no longer used to test for the existence of the find bitops.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-26 | bitops: add #ifndef for each of find bitops | Akinobu Mita | 1 | -0/+12
The style that we normally use in asm-generic is to test the macro itself for existence, so in asm-generic, do:

    #ifndef find_next_zero_bit_le
    extern unsigned long find_next_zero_bit_le(const void *addr,
            unsigned long size, unsigned long offset);
    #endif

and in the architectures, write:

    static inline unsigned long find_next_zero_bit_le(const void *addr,
            unsigned long size, unsigned long offset)
    #define find_next_zero_bit_le find_next_zero_bit_le

This adds the #ifndef for each of the find bitops in the generic header and source files.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-23 | bitops: introduce CONFIG_GENERIC_FIND_BIT_LE | Akinobu Mita | 1 | -0/+3
This introduces CONFIG_GENERIC_FIND_BIT_LE to tell whether to use the generic implementation of find_*_bit_le() in lib/find_next_bit.c or not.

For now we select CONFIG_GENERIC_FIND_BIT_LE for all architectures which enable CONFIG_GENERIC_FIND_NEXT_BIT. But m68knommu wants to define its own, faster find_next_zero_bit_le() while continuing to use the generic find_next_{,zero_}bit() (CONFIG_GENERIC_FIND_NEXT_BIT and !CONFIG_GENERIC_FIND_BIT_LE).

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-03-23 | asm-generic: change little-endian bitops to take any pointer types | Akinobu Mita | 1 | -4/+6
This makes the little-endian bitops take any pointer types by changing the prototypes and adding casts in the preprocessor macros.

That would seem to at least make all the filesystem code happier, and they can continue to do just something like

    #define ext2_set_bit __test_and_set_bit_le

(or whatever the exact sequence ends up being).

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Matthew Wilcox <willy@debian.org>
Cc: Grant Grundler <grundler@parisc-linux.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Zankel <chris@zankel.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
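For illustration only, a minimal sketch of the idea: once the little-endian bitop takes a void * pointer, a filesystem wrapper can be a plain alias and call sites need no casts. The byte-wise body below is a simplified stand-in, not the kernel's word-at-a-time definition of __test_and_set_bit_le:

    /* Simplified stand-in: LE bitmap layout puts bit nr in byte nr/8,
     * at bit position nr%8, regardless of host endianness. */
    static inline int __test_and_set_bit_le(int nr, void *addr)
    {
            unsigned char *p = addr;
            unsigned char mask = 1u << (nr & 7);
            int old = (p[nr >> 3] & mask) != 0;

            p[nr >> 3] |= mask;
            return old;             /* previous value of the bit */
    }

    /* Wrapper is now just an alias; any pointer type can be passed. */
    #define ext2_set_bit __test_and_set_bit_le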
2011-03-23 | asm-generic: rename generic little-endian bitops functions | Akinobu Mita | 1 | -5/+4
As a preparation for providing little-endian bitops for all architectures, this renames the generic implementation of the little-endian bitops (drop the "generic_" prefix and use a "_le" postfix):

    s/generic_find_next_le_bit/find_next_bit_le/
    s/generic_find_next_zero_le_bit/find_next_zero_bit_le/
    s/generic_find_first_zero_le_bit/find_first_zero_bit_le/
    s/generic___test_and_set_le_bit/__test_and_set_bit_le/
    s/generic___test_and_clear_le_bit/__test_and_clear_bit_le/
    s/generic_test_le_bit/test_bit_le/
    s/generic___set_le_bit/__set_bit_le/
    s/generic___clear_le_bit/__clear_bit_le/
    s/generic_test_and_set_le_bit/test_and_set_bit_le/
    s/generic_test_and_clear_le_bit/test_and_clear_bit_le/

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29 | bitops: remove "optimizations" | Thomas Gleixner | 1 | -12/+10
The mapsize optimizations which were moved from x86 to the generic code in commit 64970b68d2b3ed32b964b0b30b1b98518fde388e increased the binary size on non-x86 architectures.

Looking into the real effects of the "optimizations", it turned out that they are not used in find_next_bit() and find_next_zero_bit(). The ones in find_first_bit() and find_first_zero_bit() are used in a couple of places, but none of them is a real hot path.

Remove the "optimizations" altogether and call the library functions unconditionally.

Boot-tested on x86 and compile-tested on every cross compiler I have.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-26 | x86: generic versions of find_first_(zero_)bit, convert i386 | Alexander van Heukelum | 1 | -0/+58
Generic versions of __find_first_bit and __find_first_zero_bit are introduced as simplified versions of __find_next_bit and __find_next_zero_bit. Their compilation and use are guarded by a new config variable GENERIC_FIND_FIRST_BIT.

The generic versions of find_first_bit and find_first_zero_bit are implemented in terms of the newly introduced __find_first_bit and __find_first_zero_bit.

This patch does not remove the i386-specific implementation, but it does switch i386 to use the generic functions by setting GENERIC_FIND_FIRST_BIT=y for X86_32.

Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
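For orientation, a condensed sketch of what such a generic first-bit search can look like. This is a simplified stand-in, not the exact lib/find_next_bit.c code, and it assumes a GCC-style __builtin_ctzl for the in-word search:

    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Return the index of the first set bit, or 'size' if none is set. */
    static unsigned long generic_find_first_bit(const unsigned long *addr,
                                                unsigned long size)
    {
            unsigned long idx;

            for (idx = 0; idx * BITS_PER_LONG < size; idx++) {
                    if (addr[idx]) {
                            unsigned long pos = idx * BITS_PER_LONG +
                                                __builtin_ctzl(addr[idx]);
                            /* Ignore stray bits past 'size' in the last word. */
                            return pos < size ? pos : size;
                    }
            }
            return size;
    }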
2008-04-26 | x86, generic: optimize find_next_(zero_)bit for small constant-size bitmaps | Alexander van Heukelum | 1 | -16/+9
This moves an optimization for searching constant-sized small bitmaps from x86_64-specific to generic code.

On an i386 defconfig (the x86#testing one), the size of vmlinux hardly changes with this applied. I have observed only four places where this optimization avoids a call into find_next_bit: in the functions return_unused_surplus_pages, alloc_fresh_huge_page, and adjust_pool_surplus, this patch avoids a call for a 1-bit bitmap; in __next_cpu a call is avoided for a 32-bit bitmap. That's it.

On x86_64, 52 locations are optimized with a minimal increase in code size:

Current #testing defconfig:
  146 x bsf, 27 x find_next_*bit
     text    data     bss     dec     hex filename
  5392637  846592  724424 6963653  6a41c5 vmlinux

After removing the x86_64 specific optimization for find_next_*bit:
  94 x bsf, 79 x find_next_*bit
     text    data     bss     dec     hex filename
  5392358  846592  724424 6963374  6a40ae vmlinux

After this patch (making the optimization generic):
  146 x bsf, 27 x find_next_*bit
     text    data     bss     dec     hex filename
  5392396  846592  724424 6963412  6a40d4 vmlinux

[ tglx@linutronix.de: build fixes ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
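A hedged sketch of the kind of compile-time shortcut this describes: if the compiler can prove the bitmap fits in a single word, the search is done inline and the library call disappears. The wrapper name and exact guards below are illustrative only (the real kernel wrapper differs by version); find_next_bit is the out-of-line library routine, declared here for the example:

    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Out-of-line library routine, as provided by lib/find_next_bit.c. */
    extern unsigned long find_next_bit(const unsigned long *addr,
                                       unsigned long size, unsigned long offset);

    /*
     * A sentinel bit at position 'size' makes the count of trailing zeros
     * come out as 'size' when no real bit is set at or after 'offset'.
     * Assumes offset <= size and that bits above 'size' are clear.
     */
    static inline unsigned long
    find_next_bit_inline(const unsigned long *addr, unsigned long size,
                         unsigned long offset)
    {
            if (__builtin_constant_p(size) && size < BITS_PER_LONG) {
                    unsigned long value = (*addr & (~0UL << offset)) |
                                          (1UL << size);
                    return (unsigned long)__builtin_ctzl(value);
            }
            return find_next_bit(addr, size, offset);
    }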
2008-04-26 | x86: change x86 to use generic find_next_bit | Alexander van Heukelum | 1 | -0/+2
The versions with inline assembly are in fact slower on the machines I tested them on (in userspace) (Athlon XP 2800+, p4-like Xeon 2.8GHz, AMD Opteron 270). The i386-version needed a fix similar to 06024f21 to avoid crashing the benchmark.

Benchmark using: gcc -fomit-frame-pointer -Os. For each bitmap size 1...512, for each possible bitmap with one bit set, for each possible offset: find the position of the first bit starting at offset. If you follow ;). Times include setup of the bitmap and checking of the results.

                    Athlon       Xeon      Opteron 32/64bit
  x86-specific:   0m3.692s   0m2.820s   0m3.196s / 0m2.480s
  generic:        0m2.622s   0m1.662s   0m2.100s / 0m1.572s

If the bitmap size is not a multiple of BITS_PER_LONG, and no set (cleared) bit is found, find_next_bit (find_next_zero_bit) returns a value outside of the range [0, size]. The generic version always returns exactly size. The generic version also uses unsigned long everywhere, while the x86 versions use a mishmash of int, unsigned (int), long and unsigned long.

Using the generic version does give a slightly bigger kernel, though.

  defconfig:        text    data     bss     dec     hex filename
  x86-specific:  4738555  481232  626688 5846475  5935cb vmlinux (32 bit)
  generic:       4738621  481232  626688 5846541  59360d vmlinux (32 bit)
  x86-specific:  5392395  846568  724424 6963387  6a40bb vmlinux (64 bit)
  generic:       5392458  846568  724424 6963450  6a40fa vmlinux (64 bit)

Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
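For a sense of what such a harness might look like, here is a rough userspace sketch along the lines described above. It is hypothetical (the original benchmark code is not part of this log) and assumes the find_next_bit implementation under test is linked in separately:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define MAX_BITS 512
    #define BITS_PER_LONG (8 * sizeof(unsigned long))
    #define WORDS ((MAX_BITS + BITS_PER_LONG - 1) / BITS_PER_LONG)

    extern unsigned long find_next_bit(const unsigned long *addr,
                                       unsigned long size, unsigned long offset);

    int main(void)
    {
            static unsigned long map[WORDS];
            unsigned long size, bit, offset, sum = 0;
            clock_t start = clock();

            /* Every size, every single-bit bitmap, every offset. */
            for (size = 1; size <= MAX_BITS; size++)
                    for (bit = 0; bit < size; bit++) {
                            memset(map, 0, sizeof(map));
                            map[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
                            for (offset = 0; offset <= size; offset++)
                                    sum += find_next_bit(map, size, offset);
                    }

            printf("checksum %lu, %.3f s\n", sum,
                   (double)(clock() - start) / CLOCKS_PER_SEC);
            return 0;
    }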
2008-01-28 | ext4: Add ext4_find_next_bit() | Aneesh Kumar K.V | 1 | -0/+43
This function is used by the ext4 multi-block allocator patches. Also add generic_find_next_le_bit.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2006-03-26 | [PATCH] bitops: generic ext2_{set,clear,test,find_first_zero,find_next_zero}_bit() | Akinobu Mita | 1 | -0/+73
This patch introduces the C-language equivalents of the functions below:

    int ext2_set_bit(int nr, volatile unsigned long *addr);
    int ext2_clear_bit(int nr, volatile unsigned long *addr);
    int ext2_test_bit(int nr, const volatile unsigned long *addr);
    unsigned long ext2_find_first_zero_bit(const unsigned long *addr,
            unsigned long size);
    unsigned long ext2_find_next_zero_bit(const unsigned long *addr,
            unsigned long size);

in include/asm-generic/bitops/ext2-non-atomic.h

This code is largely copied from:
    include/asm-powerpc/bitops.h
    include/asm-parisc/bitops.h

Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
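As a rough illustration of the little-endian (ext2-style) bit layout these helpers use, a simplified byte-wise sketch of the first-zero-bit search follows. It is not the kernel's word-at-a-time implementation:

    /* In the ext2/LE convention, bit nr lives in byte nr/8 at position
     * nr%8, regardless of host endianness, so a portable (if slow)
     * byte-wise scan is straightforward. */
    static unsigned long ext2_find_first_zero_bit_sketch(const unsigned long *addr,
                                                          unsigned long size)
    {
            const unsigned char *p = (const unsigned char *)addr;
            unsigned long i;

            for (i = 0; i < size; i++)
                    if (!(p[i >> 3] & (1u << (i & 7))))
                            return i;       /* index of the first zero bit */
            return size;                    /* bitmap is all ones */
    }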
2006-03-26 | [PATCH] bitops: generic find_{next,first}{,_zero}_bit() | Akinobu Mita | 1 | -31/+81
This patch introduces the C-language equivalents of the functions below:

    unsigned long find_next_bit(const unsigned long *addr,
            unsigned long size, unsigned long offset);
    unsigned long find_next_zero_bit(const unsigned long *addr,
            unsigned long size, unsigned long offset);
    unsigned long find_first_zero_bit(const unsigned long *addr,
            unsigned long size);
    unsigned long find_first_bit(const unsigned long *addr,
            unsigned long size);

in include/asm-generic/bitops/find.h

This code is largely copied from:
    arch/powerpc/lib/bitops.c

Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
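For orientation, a condensed sketch of how such a generic find_next_bit can work: mask off bits below 'offset' in the first word, then scan word by word. This is a simplified stand-in, not a copy of the lib/find_next_bit.c code, and it assumes a GCC-style __builtin_ctzl for the in-word search:

    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Return the index of the next set bit at or after 'offset',
     * or 'size' if there is none. */
    static unsigned long find_next_bit_sketch(const unsigned long *addr,
                                              unsigned long size,
                                              unsigned long offset)
    {
            unsigned long idx, word, pos;

            if (offset >= size)
                    return size;

            /* Mask off bits below 'offset' in the first word... */
            idx = offset / BITS_PER_LONG;
            word = addr[idx] & (~0UL << (offset % BITS_PER_LONG));

            /* ...then scan word by word until a non-zero word is found. */
            while (!word) {
                    if (++idx * BITS_PER_LONG >= size)
                            return size;
                    word = addr[idx];
            }

            pos = idx * BITS_PER_LONG + __builtin_ctzl(word);
            return pos < size ? pos : size; /* ignore stray bits past 'size' */
    }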
2006-01-08 | [PATCH] frv: implement and export various things required by modules | David Howells | 1 | -0/+3
Export a number of features required to build all the modules. It also implements the following simple features:

 (*) csum_partial_copy_from_user() for MMU as well as no-MMU.

 (*) __ucmpdi2().

so that they can be exported too.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16 | Linux-2.6.12-rc2 (tag: v2.6.12-rc2) | Linus Torvalds | 1 | -0/+55
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!