author		Jon Mason <jdmason@us.ibm.com>		2006-01-23 10:58:20 -0600
committer	Paul Mackerras <paulus@samba.org>	2006-02-10 16:53:51 +1100
commit		2ef9481e666b4654159ac9f847e6963809e3c470
tree		62abb35633702dcc585df1e2ee093aaf0dc6bb07 /drivers/char/hvcs.c
parent		75288c78c69020a574d93770c3a941b785f3d93d
[PATCH] powerpc: trivial: modify comments to refer to new location of files
This patch removes all self references and fixes references to files
in the now defunct arch/ppc64 tree. I think this accomplishes
everything wanted, though there might be a few references I missed.
Signed-off-by: Jon Mason <jdmason@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'drivers/char/hvcs.c')
-rw-r--r--	drivers/char/hvcs.c	9
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/char/hvcs.c b/drivers/char/hvcs.c
index 831eb4e8d9d3..f7ac31856572 100644
--- a/drivers/char/hvcs.c
+++ b/drivers/char/hvcs.c
@@ -118,7 +118,7 @@
  * the hvcs_final_close() function in order to get it out of the spinlock.
  * Rearranged hvcs_close(). Cleaned up some printks and did some housekeeping
  * on the changelog. Removed local CLC_LENGTH and used HVCS_CLC_LENGTH from
- * arch/ppc64/hvcserver.h.
+ * include/asm-powerpc/hvcserver.h
  *
  * 1.3.2 -> 1.3.3 Replaced yield() in hvcs_close() with tty_wait_until_sent() to
  * prevent possible lockup with realtime scheduling as similarily pointed out by
@@ -168,9 +168,10 @@ MODULE_VERSION(HVCS_DRIVER_VERSION);
 
 /*
  * The hcall interface involves putting 8 chars into each of two registers.
- * We load up those 2 registers (in arch/ppc64/hvconsole.c) by casting char[16]
- * to long[2]. It would work without __ALIGNED__, but a little (tiny) bit
- * slower because an unaligned load is slower than aligned load.
+ * We load up those 2 registers (in arch/powerpc/platforms/pseries/hvconsole.c)
+ * by casting char[16] to long[2]. It would work without __ALIGNED__, but a
+ * little (tiny) bit slower because an unaligned load is slower than aligned
+ * load.
  */
 #define __ALIGNED__ __attribute__((__aligned__(8)))
 
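For context on the comment being reworded in the second hunk: below is a
minimal user-space sketch (not the driver's actual code) of the
char[16]-to-long[2] trick it describes, assuming a 64-bit long as on ppc64.
The names pack_hcall_args and buf are illustrative only.

#include <stdio.h>

/* Same alignment macro the driver defines; 8 bytes matches sizeof(long)
 * on ppc64, so the two long loads below land on aligned addresses. */
#define __ALIGNED__ __attribute__((__aligned__(8)))

/* Hypothetical helper: reinterpret 16 chars as two 8-byte register
 * images, mirroring what the hvcs.c comment says the hcall path does. */
static void pack_hcall_args(const char *src, unsigned long regs[2])
{
	const unsigned long *words = (const unsigned long *)src;

	regs[0] = words[0];
	regs[1] = words[1];
}

int main(void)
{
	/* Without __ALIGNED__ this buffer could start at any address and
	 * the casts above would issue unaligned (slower) loads. */
	char buf[16] __ALIGNED__ = "hello, hvc!";
	unsigned long regs[2];

	pack_hcall_args(buf, regs);
	printf("reg0=%#lx reg1=%#lx\n", regs[0], regs[1]);
	return 0;
}

As the comment notes, the code works without the attribute; forcing 8-byte
alignment simply guarantees the loads are aligned, which is why the driver
keeps the __ALIGNED__ macro even after the file paths moved.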