path: root/arch/s390/lib/Makefile
Age  Commit message  (Author, files changed, lines -/+)
2016-02-23  s390/xor: optimized xor routine using the XC instruction  (Martin Schwidefsky, 1 file, -1/+1)
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-03-25s390: remove "64" suffix from mem64.S and swsusp_asm64.SHeiko Carstens1-1/+1
Rename two more files which I forgot. Also remove the "asm" from the swsusp_asm64.S file, since the ".S" suffix already makes it obvious that this file contains assembler code. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-03-25  s390: remove 31 bit support  (Heiko Carstens, 1 file, -2/+1)
Remove the 31 bit support in order to reduce maintenance cost and effectively remove dead code. For a couple of years there has been no distribution left that comes with a 31 bit kernel. The 31 bit kernel had also been broken for more than a year before anybody noticed. In addition I added a removal warning to the kernel shown at ipl for 5 minutes: a960062e5826 ("s390: add 31 bit warning message") which let everybody know about the plan to remove 31 bit code. We didn't get any response. Given that the last 31 bit only machine was introduced in 1999, let's remove the code. Anybody with 31 bit user space code can still use the compat mode. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-09-25  s390/uprobes: common library for kprobes and uprobes  (Jan Willeke, 1 file, -0/+2)
This patch moves common functions from kprobes.c to probes.c. Thus it is possible for uprobes to use them without enabling kprobes. Signed-off-by: Jan Willeke <willeke@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-04-03  s390/uaccess: rework uaccess code - fix locking issues  (Heiko Carstens, 1 file, -1/+1)
The current uaccess code uses a page table walk in some circumstances, e.g. for the in-atomic futex operations or if running on old hardware which doesn't support the mvcos instruction. However it turned out that the page table walk code does not correctly lock page tables when accessing page table entries. In other words: a different cpu may invalidate a page table entry while the current cpu inspects the pte. This may lead to random data corruption.

Adding correct locking however isn't trivial for all uaccess operations. Especially copy_in_user() is problematic since it requires holding at least two locks, but must be protected against ABBA deadlock when a different cpu also performs a copy_in_user() operation.

So the solution is a different approach where we change address spaces: user space runs in primary address mode, or access register mode within vdso code, like it currently already does. The kernel usually also runs in home space mode, however when accessing user space the kernel switches to primary or secondary address mode if the mvcos instruction is not available or if a compare-and-swap (futex) instruction on a user space address is performed. KVM however is special, since that requires the kernel to run in home address space while implicitly accessing user space with the sie instruction.

So we end up with:

User space:
- runs in primary or access register mode
- cr1 contains the user asce
- cr7 contains the user asce
- cr13 contains the kernel asce

Kernel space:
- runs in home space mode
- cr1 contains the user or kernel asce; the kernel asce is loaded when a uaccess requires primary or secondary address mode
- cr7 contains the user or kernel asce (changed with set_fs())
- cr13 contains the kernel asce

In case of uaccess the kernel changes to:
- primary space mode in case of a uaccess (copy_to_user) and uses e.g. the mvcp instruction to access user space; however the kernel will stay in home space mode if the mvcos instruction is available
- secondary space mode in case of futex atomic operations, so that the instructions come from primary address space and data from secondary space

In case of kvm the kernel runs in home space mode, but cr1 gets switched to contain the gmap asce before the sie instruction gets executed. When the sie instruction is finished cr1 will be switched back to contain the user asce.

A context switch between two processes will always load the kernel asce for the next process in cr1. So the first exit to user space is a bit more expensive (one extra load control register instruction) than before, however it keeps the code rather simple.

In sum this means there is no need to perform any error prone page table walks anymore when accessing user space.

The patch seems to be rather large, however it mainly removes the page table walk code and restores the previously deleted "standard" uaccess code, with a couple of changes.

The uaccess without mvcos mode can be enforced with the "uaccess_primary" kernel parameter.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-02-21  s390/uaccess: get rid of indirect function calls  (Heiko Carstens, 1 file, -2/+1)
There are only two uaccess variants on s390 left: the version that is used if the mvcos instruction is available, and the page table walk variant. So there is no need for expensive indirect function calls. By default the mvcos variant will be called. If the mvcos instruction is not available it will call the page table walk variant. For minimal performance impact the "if (mvcos_is_available)" is implemented with a jump label, which will be a six byte nop on machines with mvcos. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
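As a rough illustration of the dispatch described above, here is a minimal user-space sketch in plain C. All names are hypothetical stand-ins, and a plain variable replaces the jump label that the real code patches into a six byte nop when mvcos is present:

	#include <stddef.h>
	#include <stdio.h>
	#include <string.h>

	static int mvcos_is_available;	/* set once at boot from the facility bits */

	/* stand-in for the MVCOS based copy routine */
	static size_t copy_from_user_mvcos(void *to, const void *from, size_t n)
	{
		memcpy(to, from, n);
		return 0;		/* number of bytes NOT copied */
	}

	/* stand-in for the page table walk fallback */
	static size_t copy_from_user_pt(void *to, const void *from, size_t n)
	{
		memcpy(to, from, n);
		return 0;
	}

	static size_t copy_from_user_sketch(void *to, const void *from, size_t n)
	{
		if (mvcos_is_available)	/* a jump label in the real code */
			return copy_from_user_mvcos(to, from, n);
		return copy_from_user_pt(to, from, n);
	}

	int main(void)
	{
		char src[8] = "s390", dst[8];

		mvcos_is_available = 1;
		printf("left over: %zu, copied: %s\n",
		       copy_from_user_sketch(dst, src, sizeof(src)), dst);
		return 0;
	}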
2013-10-24  s390/uaccess: always run the kernel in home space  (Martin Schwidefsky, 1 file, -1/+1)
Simplify the uaccess code by removing the user_mode=home option. The kernel will now always run in the home space mode. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-10-24  s390/bitops: use generic find bit functions / reimplement _left variant  (Heiko Carstens, 1 file, -1/+1)
Just like all other architectures we should use out-of-line find bit operations, since the inline variants bloat the size of the kernel image. And also like all other architectures we should only supply optimized variants of the __ffs, ffs, etc. primitives. Therefore this patch removes the inlined s390 find bit functions and uses the generic out-of-line variants instead. The optimization of the primitives follows with the next patch. With this patch also the functions find_first_bit_left() and find_next_bit_left() have been reimplemented, since logically they are nothing else but a find_first_bit()/find_next_bit() implementation that uses an inverted __fls() instead of __ffs(). Also the restriction that these functions only work on machines which support the "flogr" instruction is gone now. This reduces the size of the kernel image (defconfig, -march=z9-109) by 144,482 bytes. Alone the size of the function build_sched_domains() gets reduced from 7 KB to 3.5 KB. We also get rid of unused functions like find_first_bit_le()... Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
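For reference, an out-of-line find bit helper boils down to a word-wise loop like the following simplified user-space sketch (not the exact lib/find_bit.c code; the _left variants mentioned above do the same walk but with an inverted __fls() instead of __ffs()):

	#include <limits.h>
	#include <stdio.h>

	#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

	/* return the index of the first set bit, or size if none is set */
	static unsigned long find_first_bit_sketch(const unsigned long *addr,
						   unsigned long size)
	{
		unsigned long i, bit;

		for (i = 0; i * BITS_PER_LONG < size; i++) {
			if (!addr[i])
				continue;
			/* __builtin_ctzl() plays the role of __ffs() here */
			bit = i * BITS_PER_LONG + __builtin_ctzl(addr[i]);
			return bit < size ? bit : size;
		}
		return size;
	}

	int main(void)
	{
		unsigned long map[2] = { 0, 1UL << 5 };

		printf("%lu\n", find_first_bit_sketch(map, 2 * BITS_PER_LONG));
		return 0;
	}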
2013-04-30  Kconfig: consolidate CONFIG_DEBUG_STRICT_USER_COPY_CHECKS  (Stephen Boyd, 1 file, -1/+0)
The help text for this config is duplicated across the x86, parisc, and s390 Kconfig.debug files. Arnd Bergmann noted that the help text was slightly misleading and should be fixed to state that enabling this option isn't a problem when using pre-4.4 gcc. To simplify the rewording, consolidate the text into lib/Kconfig.debug and modify it there to be more explicit about when you should say N to this config. Also, make the text a bit more generic by stating that this option enables compile time checks so we can cover architectures which emit warnings vs. ones which emit errors. The details of how an architecture decided to implement the checks aren't as important as the concept of compile time checking of copy_from_user() calls. While we're doing this, remove all the copy_from_user_overflow() code that's duplicated many times and place it into lib/ so that any architecture supporting this option can get the function for free. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Helge Deller <deller@gmx.de> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Chris Metcalf <cmetcalf@tilera.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
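The compile time check itself is conceptually simple. A hedged sketch of the idea, using gcc's __builtin_object_size() and warning attribute (the helper name is illustrative, not the kernel's exact copy_from_user_overflow() machinery):

	#include <string.h>

	/* deliberately never defined: a call site that survives optimization
	 * triggers the attached compile time diagnostic */
	extern void copy_overflow_warning(void)
		__attribute__((warning("copy_from_user() buffer size is not provably correct")));

	static inline unsigned long
	copy_from_user_checked(void *to, const void *from, unsigned long n)
	{
		unsigned long sz = __builtin_object_size(to, 0);

		if (sz != (unsigned long)-1 && sz < n)
			copy_overflow_warning();	/* compile time warning */

		/* ... the real arch specific copy would happen here ... */
		memcpy(to, from, n);
		return 0;
	}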
2012-09-26  s390/string: provide asm lib functions for memcpy and memcmp  (Heiko Carstens, 1 file, -1/+2)
Our memcpy and memcmp variants were implemented by calling the corresponding gcc builtin variants. However gcc is free to replace a call to __builtin_memcmp with a call to memcmp which, when called, will result in an endless recursion within memcmp. So let's provide asm variants and also fix the variants that are used for uncompressing the kernel image. In addition remove all other occurrences of builtin function calls. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
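The pitfall is easy to see in miniature. In the sketch below the first function shows exactly the pattern that can recurse once it is the memcmp gcc lowers the builtin to, while the second is the kind of self-contained replacement the commit provides (the real fix uses s390 assembler; a plain C loop stands in here):

	#include <stddef.h>

	/* BROKEN pattern: if this were the kernel's memcmp(), gcc may lower
	 * the builtin into "call memcmp" -- a call back into this function. */
	int memcmp_via_builtin(const void *s1, const void *s2, size_t n)
	{
		return __builtin_memcmp(s1, s2, n);
	}

	/* Self-contained replacement that never depends on the builtin. */
	int memcmp_plain(const void *s1, const void *s2, size_t n)
	{
		const unsigned char *a = s1, *b = s2;

		for (; n; n--, a++, b++) {
			if (*a != *b)
				return *a < *b ? -1 : 1;
		}
		return 0;
	}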
2010-03-08  [S390] uaccess: make sure copy_from_user_overflow is builtin  (Heiko Carstens, 1 file, -1/+2)
If there is no caller within the kernel image, modules will suffer: ERROR: "copy_from_user_overflow" [net/core/pktgen.ko] undefined! ERROR: "copy_from_user_overflow" [net/can/can-raw.ko] undefined! ERROR: "copy_from_user_overflow" [fs/cifs/cifs.ko] undefined! Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2010-02-26  [S390] uaccess: implement strict user copy checks  (Heiko Carstens, 1 file, -1/+1)
Same as on x86 and sparc, except that enabling the option will just emit compile time warnings instead of errors. This keeps allyesconfig kernels compiling. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-07-07  [S390] add __ucmpdi2() helper function  (Heiko Carstens, 1 file, -1/+1)
Provide a __ucmpdi2() helper function on 31 bit so we don't run into compile errors like this one again and again: kernel/built-in.o: In function `T.689': perf_counter.c:(.text+0x56c86): undefined reference to `__ucmpdi2' Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
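__ucmpdi2() is one of gcc's libgcc helpers: it compares two unsigned 64 bit values and returns 0, 1 or 2 for less than, equal and greater than. A minimal sketch of such a helper (not the exact arch/s390/lib code) could look like this; the comparison is done on the 32 bit halves, since a direct 64 bit compare would just make gcc emit another call to __ucmpdi2 on 31 bit:

	/* return 0 if a < b, 1 if a == b, 2 if a > b */
	int __ucmpdi2(unsigned long long a, unsigned long long b)
	{
		unsigned int ah = a >> 32, al = a;	/* high and low 32 bits */
		unsigned int bh = b >> 32, bl = b;

		if (ah != bh)
			return ah < bh ? 0 : 2;
		if (al != bl)
			return al < bl ? 0 : 2;
		return 1;
	}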
2008-04-30  [S390] remove -traditional  (Mathieu Desnoyers, 1 file, -2/+0)
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca> CC: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2007-07-10  [S390] Bogomips calculation for 64 bit.  (Martin Schwidefsky, 1 file, -2/+2)
The bogomips calculation triggered via reading from /proc/cpuinfo can return incorrect values if the qrnnd assembly is called with a pointer in %r2 with any of the upper 32 bits set. Fix this by using 64 bit division / remainder operation provided by gcc instead of calling the assembly. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2007-04-25  [S390]: Fix build on 31-bit.  (David S. Miller, 1 file, -1/+1)
Allow s390 to properly override the generic __div64_32() implementation by:
1) Using obj-y for div64.o in s390's makefile instead of lib-y
2) Adding the weak attribute to the generic implementation.
Signed-off-by: David S. Miller <davem@davemloft.net>
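The weak attribute half of this works as in the following sketch (generic definition only; the strong s390 definition lives in a separate object that must be pulled in unconditionally, which is exactly why the makefile switches from lib-y to obj-y):

	#include <stdio.h>

	/* generic fallback: a non-weak definition elsewhere wins at link time */
	unsigned int __attribute__((weak))
	__div64_32(unsigned long long *n, unsigned int base)
	{
		unsigned int rem = *n % base;

		*n /= base;
		return rem;
	}

	int main(void)
	{
		unsigned long long x = 1000000007ULL;

		printf("rem=%u quot=%llu\n", __div64_32(&x, 10), x);
		return 0;
	}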
2007-02-05  [S390] Calibrate delay and bogomips.  (Martin Schwidefsky, 1 file, -1/+1)
Preset the bogomips number to the cpu capacity value reported by store system information in SYSIB 1.2.2. This value is constant for a particular machine model and can be used to determine relative performance differences between machines. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2006-12-04  [S390] Add dynamic size check for usercopy functions.  (Gerald Schaefer, 1 file, -1/+1)
Use a wrapper for copy_to/from_user to choose the best usercopy method. The mvcos instruction is better for sizes greater than 256 bytes; if mvcos is not available, a page table walk is better for sizes greater than 1024 bytes. Also removed the redundant copy_to/from_user_std_small functions. Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
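The selection logic amounts to the thresholds quoted above; a tiny, hedged sketch with illustrative names:

	#include <stddef.h>

	enum copy_method { COPY_STD, COPY_MVCOS, COPY_PT_WALK };

	/* pick the copy variant for a given size, per the thresholds above */
	static enum copy_method pick_copy_method(size_t n, int mvcos_available)
	{
		if (mvcos_available)
			return n > 256 ? COPY_MVCOS : COPY_STD;
		return n > 1024 ? COPY_PT_WALK : COPY_STD;
	}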
2006-09-28  [S390] __div64_32 for 31 bit.  (Martin Schwidefsky, 1 file, -0/+1)
The clocksource infrastructure introduced with commit ad596171ed635c51a9eef829187af100cbf8dcf7 broke 31 bit s390. The reason is that the do_div() primitive for 31 bit always had a restriction: it could only divide an unsigned 64 bit integer by an unsigned 31 bit integer. The clocksource code now uses do_div() with a base value that has the most significant bit set. The result is that clock->cycle_interval has a funny value which causes the linux time to jump around like mad. The solution is "obvious": implement a proper __div64_32 function for 31 bit s390. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
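For context, do_div() has an unusual calling convention: it divides the 64 bit variable in place and yields the remainder, and on 31 bit it ends up in __div64_32(). A user-space sketch of the contract the clocksource code relies on, including a base with the most significant bit set (stand-in code; the real 31 bit helper cannot simply use the C operators like this):

	#include <stdint.h>
	#include <stdio.h>

	/* stand-in for __div64_32(): divide *n in place, return the remainder */
	static uint32_t div64_32_sketch(uint64_t *n, uint32_t base)
	{
		uint32_t rem = (uint32_t)(*n % base);

		*n /= base;
		return rem;
	}

	int main(void)
	{
		uint64_t cycles = 123456789012345ULL;
		uint32_t base = 0x80000001u;	/* msb set: the case that used to break */
		uint32_t rem = div64_32_sketch(&cycles, base);

		printf("quotient=%llu remainder=%u\n",
		       (unsigned long long)cycles, rem);
		return 0;
	}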
2006-09-20  [S390] Use alternative user-copy operations for new hardware.  (Gerald Schaefer, 1 file, -0/+1)
This introduces new user-copy operations which are optimized for copying more than 256 bytes on new hardware. Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2006-09-20  [S390] Make user-copy operations run-time configurable.  (Gerald Schaefer, 1 file, -2/+1)
Introduces a struct uaccess_ops which allows setting user-copy operations at run-time. Signed-off-by: Gerald Schaefer <geraldsc@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
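Run-time selection of the copy backend is a classic ops-struct pattern; a hedged sketch (field and function names are illustrative, not the historical struct uaccess_ops layout):

	#include <stddef.h>
	#include <string.h>

	struct uaccess_ops_sketch {
		size_t (*copy_from_user)(size_t n, const void *from, void *to);
		size_t (*copy_to_user)(size_t n, const void *from, void *to);
	};

	/* one possible backend, standing in for the std/mvcos variants */
	static size_t copy_from_user_std(size_t n, const void *from, void *to)
	{
		memcpy(to, from, n);
		return 0;
	}

	static size_t copy_to_user_std(size_t n, const void *from, void *to)
	{
		memcpy(to, from, n);
		return 0;
	}

	static const struct uaccess_ops_sketch uaccess_std = {
		.copy_from_user	= copy_from_user_std,
		.copy_to_user	= copy_to_user_std,
	};

	/* chosen once at boot, e.g. swapped for an mvcos backend on new hardware */
	static const struct uaccess_ops_sketch *uaccess = &uaccess_std;

	static inline size_t do_copy_from_user(size_t n, const void *from, void *to)
	{
		return uaccess->copy_from_user(n, from, to);
	}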
2006-02-01  [PATCH] s390: Remove CVS generated information  (Heiko Carstens, 1 file, -1/+1)
- Remove all CVS generated information like e.g. revision IDs from drivers/s390 and include/asm-s390 (none present in arch/s390).
- Add newline at end of arch/s390/lib/Makefile to avoid diff message.
Acked-by: Andreas Herrmann <aherrman@de.ibm.com> Acked-by: Frank Pavlic <pavlic@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-14  [PATCH] s390: spinlock fixes  (Martin Schwidefsky, 1 file, -1/+2)
Remove useless spin_retry_counter and fix compilation for UP kernels. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-01-06  [PATCH] s390: cleanup Kconfig  (Martin Schwidefsky, 1 file, -3/+2)
Sanitize some s390 Kconfig options. We have ARCH_S390, ARCH_S390X, ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT. Replace these 6 options by S390, 64BIT and COMPAT. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27  [PATCH] s390: spin lock retry  (Martin Schwidefsky, 1 file, -2/+2)
Split the spin lock and r/w lock implementation into a single try, which is done inline, and an out of line function that repeatedly tries to get the lock before doing the cpu_relax(). Add a system control to set the number of retries before a cpu is yielded. The reason for the spin lock retry is that the diagnose 0x44 that is used to give up the virtual cpu is quite expensive. For spin locks that are held only for a short period of time, the cost of the diagnose outweighs the savings it brings for spin locks that are held for a longer time. The default retry count is 1000. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
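A user-space approximation of the split described above (hypothetical names; sched_yield() stands in for diagnose 0x44 and an empty statement for cpu_relax()):

	#include <sched.h>
	#include <stdatomic.h>

	static int spin_retry = 1000;	/* default retry count, tunable via sysctl */

	struct sketch_lock {
		atomic_int owner;	/* 0 means unlocked, otherwise cpu id + 1 */
	};

	static inline int lock_try_once(struct sketch_lock *lp, int tag)
	{
		int unlocked = 0;

		return atomic_compare_exchange_strong(&lp->owner, &unlocked, tag);
	}

	/* out of line slow path: retry spin_retry times before yielding the cpu */
	static void lock_wait(struct sketch_lock *lp, int tag)
	{
		int count;

		for (;;) {
			for (count = spin_retry; count > 0; count--) {
				if (lock_try_once(lp, tag))
					return;
				;	/* cpu_relax() would go here */
			}
			sched_yield();	/* diagnose 0x44 on a real s390 hypervisor */
		}
	}

	/* inline fast path: a single try, then fall back to the slow path */
	static inline void lock_acquire(struct sketch_lock *lp, int cpu)
	{
		int tag = cpu + 1;

		if (lock_try_once(lp, tag))
			return;
		lock_wait(lp, tag);
	}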
2005-04-16  Linux-2.6.12-rc2 (tag: v2.6.12-rc2)  (Linus Torvalds, 1 file, -0/+9)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!