author    | Paul Mundt <lethal@linux-sh.org> | 2006-09-27 18:36:17 +0900
committer | Paul Mundt <lethal@linux-sh.org> | 2006-09-27 18:36:17 +0900
commit    | f3c2575818fab45f8609e4aef2e43ab02b3a142e (patch)
tree      | a4924d7dd8f8df229e36fab24ccccfe12437509b /include/asm-sh/page.h
parent    | 87b0ef91b6f27c07bf7dcce8584437481f473092 (diff)
download  | linux-f3c2575818fab45f8609e4aef2e43ab02b3a142e.tar.bz2
sh: Calculate shm alignment at runtime.
Set the SHM alignment at runtime, based on the probed cache descriptor.
Optimize get_unmapped_area() to only colour-align shared mappings.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
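As a rough illustration of the first point, the alias mask can be derived once at boot from the probed cache geometry instead of a compile-time constant. The sketch below is a standalone, hypothetical example: struct cache_info, its fields, and probe_shm_align_mask() are illustrative assumptions; only shm_align_mask itself comes from this commit.

#include <stdio.h>

#define PAGE_SIZE	4096UL

/* Exported by the patch; everything else here is an illustrative sketch. */
unsigned long shm_align_mask = PAGE_SIZE - 1;	/* fallback: no aliasing */

/* Hypothetical probed cache descriptor. */
struct cache_info {
	unsigned long sets;
	unsigned long linesz;
};

/*
 * Derive the alias mask from the probed cache geometry: aliases exist
 * when one cache way spans more than a page, so shared mappings must be
 * aligned to the way size rather than a fixed constant.
 */
static void probe_shm_align_mask(const struct cache_info *c)
{
	unsigned long way_size = c->sets * c->linesz;	/* bytes per way */

	if (way_size > PAGE_SIZE)
		shm_align_mask = way_size - 1;
}

int main(void)
{
	struct cache_info dcache = { .sets = 512, .linesz = 32 };

	probe_shm_align_mask(&dcache);
	printf("shm_align_mask = %#lx\n", shm_align_mask);
	return 0;
}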
Diffstat (limited to 'include/asm-sh/page.h')
-rw-r--r-- | include/asm-sh/page.h | 2
1 file changed, 2 insertions, 0 deletions
diff --git a/include/asm-sh/page.h b/include/asm-sh/page.h
index 3d8dae31a6f6..ca8b26d90475 100644
--- a/include/asm-sh/page.h
+++ b/include/asm-sh/page.h
@@ -44,6 +44,8 @@ extern void (*clear_page)(void *to);
 extern void (*copy_page)(void *to, void *from);
 
+extern unsigned long shm_align_mask;
+
 #ifdef CONFIG_MMU
 extern void clear_page_slow(void *to);
 extern void copy_page_slow(void *to, void *from);
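For context, the exported mask is what the mmap path can use to colour-align shared mappings. The snippet below is a self-contained demonstration of the alignment arithmetic only, not the kernel's get_unmapped_area(); colour_align() and the sample values are assumptions, and shm_align_mask is hard-coded here where the kernel would use the runtime-probed value.

#include <stdio.h>
#include <sys/mman.h>

#define PAGE_SHIFT	12

/* Runtime-probed on SH; hard-coded here only for the demonstration. */
static unsigned long shm_align_mask = 0x3fffUL;	/* e.g. 16K way size - 1 */

/* Round addr up and give it the cache colour implied by pgoff. */
static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
{
	return ((addr + shm_align_mask) & ~shm_align_mask)
		+ ((pgoff << PAGE_SHIFT) & shm_align_mask);
}

int main(void)
{
	unsigned long addr = 0x29561000UL;
	unsigned long pgoff = 3;
	int flags = MAP_SHARED;	/* only shared mappings need colouring */

	if (flags & MAP_SHARED)
		addr = colour_align(addr, pgoff);

	printf("colour-aligned address: %#lx\n", addr);
	return 0;
}

Private mappings skip the colour_align() step entirely, which is the optimisation the commit message refers to: they can be placed at any page-aligned address without risking cache aliasing against another mapping of the same file.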