path: root/arch/x86_64
Age  Commit message  (Author, files changed, -removed/+added lines)
2006-09-26  [PATCH] i386/x86-64: PCI: split probing and initialization of type 1 config space access  (Andi Kleen, 1 file, -1/+1)
First probe if type1/2 accesses work, but then only initialize them at the end. This is useful for a later patch that needs this information in between. Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix idle notifiers  (Andi Kleen, 1 file, -6/+9)
Previously exit_idle would be called more often than enter_idle. Now, instead of using complicated tests, just keep track of it using the per-CPU variable as a flip-flop. I moved the idle state into the PDA to make the access more efficient. Original bug report and an initial patch from Stephane Eranian, but redone by AK. Cc: Stephane Eranian <eranian@hpl.hp.com> Signed-off-by: Andi Kleen <ak@suse.de>
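A minimal sketch of that flip-flop pairing, with assumed names (the actual patch keeps the flag in the x86-64 PDA rather than in a generic per-CPU variable, and the event codes are illustrative):

    #include <linux/percpu.h>
    #include <linux/notifier.h>

    enum { IDLE_START = 1, IDLE_END };              /* illustrative event codes */

    static DEFINE_PER_CPU(int, cpu_is_idle);
    static ATOMIC_NOTIFIER_HEAD(idle_notifier);

    void enter_idle(void)
    {
            __get_cpu_var(cpu_is_idle) = 1;         /* flag set on entering idle */
            atomic_notifier_call_chain(&idle_notifier, IDLE_START, NULL);
    }

    static void __exit_idle(void)
    {
            if (!__get_cpu_var(cpu_is_idle))        /* not idle: nothing to undo */
                    return;
            __get_cpu_var(cpu_is_idle) = 0;         /* flip the flag back */
            atomic_notifier_call_chain(&idle_notifier, IDLE_END, NULL);
    }

With the flag, exit can fire at most once per matching enter, which is exactly the invariant the bug report was about.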
2006-09-26  [PATCH] Remove experimental mark of kexec  (Eric W. Biederman, 1 file, -2/+1)
kexec has been marked experimental for a year now and all of the serious problems have been worked through. So it is time (if not past time) to remove the experimental mark. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Mark per cpu data initialization __initdata again  (Andi Kleen, 1 file, -5/+3)
Before 2.6.16 this was changed to work around code that accessed CPUs not in the possible map. But that code should be all fixed now, so mark it __initdata again. Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Check return values of __copy_to_user in uname emulation  (Andi Kleen, 1 file, -12/+16)
Quietens some new warnings Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Add __must_check to copy_*_user  (Andi Kleen, 1 file, -2/+4)
Following i386. And also fix the two occurrences that caused warnings in arch/x86_64/* Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix zeroing on exception in copy_*_user  (Andi Kleen, 2 files, -45/+80)
- Don't zero for __copy_from_user_inatomic following i386. This will prevent spurious zeros for parallel file system writers when one takes an exception.
- The string instruction version didn't zero the output on exception. Oops.
Also I cleaned up the code a bit while I was at it and added a minor optimization to the string instruction path. Signed-off-by: Andi Kleen <ak@suse.de>
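To make the zeroing convention concrete, here is a hedged sketch; the wrapper name is made up, and in the real kernel the work is done in the assembly copy routines rather than in C:

    #include <linux/string.h>
    #include <asm/uaccess.h>

    /* The inatomic/raw copy does not zero; the zeroing variant clears the
     * uncopied tail so callers never see stale kernel memory after a fault. */
    unsigned long copy_from_user_with_zeroing(void *to, const void __user *from,
                                              unsigned long n)
    {
            unsigned long left = __copy_from_user_inatomic(to, from, n);

            if (unlikely(left))                     /* faulted: `left` bytes missing */
                    memset(to + n - left, 0, left); /* zero only the uncopied tail */
            return left;
    }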
2006-09-26  [PATCH] insert IOAPIC(s) and Local APIC into resource map  (adurbin@google.com, 1 file, -0/+54)
This patch places the IOAPIC(s) and the Local APIC specified by ACPI tables into the resource map. The APICs will then be visible within /proc/iomem Signed-off-by: Aaron Durbin <adurbin@google.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix a PDA warning uncovered by the new type checking  (Andi Kleen, 1 file, -1/+1)
Fix linux/arch/x86_64/kernel/process.c: In function __switch_to: linux/arch/x86_64/kernel/process.c:626: warning: assignment makes integer from pointer without a cast Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix an irqcount comment in entry.S  (Andi Kleen, 1 file, -1/+6)
Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Add the -fstack-protector option to the CFLAGS  (Arjan van de Ven, 1 file, -0/+3)
Add a feature check that checks that the gcc compiler has stack-protector support and has the bugfix for PR28281 to make this work in kernel mode. The easiest solution I could find was to have a shell script in scripts/ to do the detection; if needed we can make this fancier in the future without making the makefile too complex. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andi Kleen <ak@suse.de> CC: Andi Kleen <ak@suse.de> CC: Sam Ravnborg <sam@ravnborg.org>
2006-09-26  [PATCH] Add the canary field to the PDA area and the task struct  (Arjan van de Ven, 1 file, -0/+8)
This patch adds the per thread cookie field to the task struct and the PDA. Also it makes sure that the PDA value gets the new cookie value at context switch, and that a new task gets a new cookie at task creation time. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andi Kleen <ak@suse.de> CC: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Add the Kconfig option for the stackprotector feature  (Arjan van de Ven, 1 file, -0/+24)
This patch adds the config options for -fstack-protector. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andi Kleen <ak@suse.de> CC: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Don't use kernel_text_address in oops context  (Andi Kleen, 1 file, -1/+3)
Because it can take spinlocks. Suggested by Mathieu Desnoyers Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Avoid overwriting the current pgd (V4, x86_64)  (Magnus Damm, 2 files, -53/+189)
kexec: Avoid overwriting the current pgd (V4, x86_64) This patch upgrades the x86_64-specific kexec code to avoid overwriting the current pgd. Overwriting the current pgd is bad when CONFIG_CRASH_DUMP is used to start a secondary kernel that dumps the memory of the previous kernel. The code introduces a new set of page tables. These tables are used to provide an executable identity mapping without overwriting the current pgd. Signed-off-by: Magnus Damm <magnus@valinux.co.jp> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Remove most of the special cases for the debug IST stack  (Keith Owens, 2 files, -36/+6)
Remove most of the special cases for the debug IST stack. This is a follow-on cleanup patch; it requires the bug fix patch that adds orig_ist. Signed-off-by: Keith Owens <kaos@ocs.com.au> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Optimize PDA accesses slightly  (Andi Kleen, 1 file, -0/+3)
Based on an idea by Jeremy Fitzhardinge: replace the volatiles and memory clobbers in the PDA access with telling gcc about accesses to a proxy PDA structure that doesn't actually exist. The dummy accesses give a defined ordering for read/write accesses. Also add some memory barriers to the early GS initialization to make sure no PDA access is moved before it. The advantage is some .text savings (probably most from better code for accessing "current"):
       text    data     bss     dec     hex filename
    4845647 1223688  615864 6685199  66020f vmlinux
    4837780 1223688  615864 6677332  65e354 vmlinux-pda
1.2% smaller code. Cc: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andi Kleen <ak@suse.de>
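Roughly, the trick looks like the following hedged sketch; the field names, operand sizes and surrounding macros are simplified and are not the actual pda.h:

    /* The proxy structure is declared but never defined.  Listing
     * _proxy_pda.field as an asm memory operand tells gcc which accesses
     * must stay ordered, so no "volatile" or blanket "memory" clobber is
     * needed; the real data is reached through %gs. */
    struct x8664_pda {
            struct task_struct *pcurrent;   /* illustrative fields only */
            unsigned long kernelstack;
    };
    extern struct x8664_pda _proxy_pda;     /* never defined anywhere */

    #define pda_from_op(op, field)                                          \
    ({                                                                      \
            typeof(_proxy_pda.field) ret__;                                 \
            asm(op " %%gs:%c1,%0"                                           \
                : "=r" (ret__)                                              \
                : "i" (offsetof(struct x8664_pda, field)),                  \
                  "m" (_proxy_pda.field));      /* informs gcc, never emitted */ \
            ret__;                                                          \
    })

    #define read_pda(field) pda_from_op("mov", field)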
2006-09-26  [PATCH] Put .note.* sections into a PT_NOTE segment  (Ian Campbell, 1 file, -4/+10)
This patch updates the x86_64 linker script to pack any .note.* sections into a PT_NOTE segment in the output file. To do this, we tell ld that we need a PT_NOTE segment. This requires us to start explicitly mapping sections to segments, so we also need to explicitly create PT_LOAD segments for text and data, and map the sections to them appropriately. Fortunately, each section will default to its previous section's segment, so it doesn't take many changes to vmlinux.lds.S.

The corresponding change is already made for i386 in -mm and I'd like this patch to join it. The section to segment mappings do change, as do the segment flags, so some time in -mm would be good for that reason as well, just in case. In particular .data and .bss move from the text segment to the data segment, and .data.cacheline_aligned and .data.read_mostly are put in the data segment instead of a separate one.

I think it would be possible to exactly match the existing section to segment mapping and flags, but it would be a more intrusive change and I'm not sure there is a reason for the existing layout other than it is what you get by default if you don't explicitly specify something else. If there is a reason for the existing layout then I will of course make the more intrusive change. If there is no reason we could probably drop the executable or writable flags from some segments, but I don't know how much attention is paid to them anyway so it might not be worth the effort.

The vsyscall related sections need to go in a different segment to the normal data segment, so I invented a "user" segment to contain them. I believe this should appear to be another data segment as far as the kernel is concerned, so the flags are set up accordingly.

The notes will be used in the Xen paravirt_ops backend to provide additional information to the domain builder. I am in the process of converting the xen-unstable kernels and tools over to this scheme at the moment to support this in the future.

It has been suggested to me that the notes segment should have flags 0 (i.e. not readable) since it is only used by the loader and is not used at runtime. For now I went with a readable segment since that is what the i386 patch uses.

AK: dropped NOTES addition right now because the needed infrastructure for that is not merged yet. Signed-off-by: Ian Campbell <ian.campbell@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Reload CS when startup_64 is used.  (Eric W. Biederman, 1 file, -4/+7)
In long mode the %cs is largely a relic. However there are a few cases like iret where it matters that we have a valid value. Without this patch it is possible to enter the kernel in startup_64 without setting %cs to a valid value. With this patch we don't care what %cs value we enter the kernel with, so long as the cs shadow register indicates it is a privileged code segment. Thanks to Magnus Damm for finding this problem and posting the first workable patch. I have moved the jump to set %cs down a few instructions so we don't need to take an extra jump, which keeps the code simpler. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Remove non e820 fallbacks in high level code  (Andi Kleen, 1 file, -19/+9)
Drop support for non-e820 BIOS calls to get the memory map. The boot assembler code still has some support, but the C code no longer does. Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix boot code head.S warning  (Paolo 'Blaisorblade' Giarrusso, 1 file, -1/+2)
When compiling a 64-bit kernel on an Ubuntu 6.06 32-bit system (whose GCC is also a cross-compiler for x86_64) I've seen that head.o is compiled as a 64-bit file (while it should not be) and ld complains about this during linking [AK: it happens on all systems with new binutils]:
    ld: warning: i386:x86-64 architecture of input file `arch/x86_64/boot/compressed/head.o' is incompatible with i386 output
I've verified that removing -m64 from the compilation flags to turn "-m64 -traditional -m32" into "-traditional -m32" fixes the issue. Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Add a missing check for irq flags tracing in NMI  (Andi Kleen, 1 file, -0/+2)
NMIs are not supposed to track the irq flags, but TRACE_IRQS_IRETQ did it anyway. Add a check. Cc: mingo@elte.hu Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix coding style and output of the mptable parser  (Andi Kleen, 1 file, -18/+18)
Give the printks a consistent prefix. Add some missing white space. Cc: len.brown@intel.com Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Remove some cruft in apic id checking during processor setup  (Andi Kleen, 1 file, -17/+1)
- Remove a define that was used only once
- Remove the too large APIC ID check because we always support the full 8bit range of APICs.
- Restructure code a bit to be simpler.
Cc: len.brown@intel.com Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Remove APIC version/cpu capability mpparse checking/printing  (Andi Kleen, 2 files, -48/+17)
ACPI went to great trouble to get the APIC version and CPU capabilities of different CPUs before passing them to the mpparser. But all that data was used for was to print it out. Actually it even faked some data based on the boot cpu, not on the actual CPU being booted. Remove all this code because it's not needed. Cc: len.brown@intel.com Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Use proper accessors to change PSE bits in change_page_attr()  (Andi Kleen, 1 file, -10/+6)
Use normal pte accessors in change_page_attr() to access the PSE bits. Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Fix pte_exec/mkexec and use it in change_page_attr()  (Andi Kleen, 1 file, -3/+5)
Fix the pte_exec/mkexec page table accessor functions to really use the NX bit. Previously they only checked the USER bit, but weren't actually used for anything. Then use them in change_page_attr() to manipulate the NX bit properly. Signed-off-by: Andi Kleen <ak@suse.de>
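For reference, "really use the NX bit" amounts to something like this hedged sketch; the bit position is the standard x86-64 no-execute bit, but the helper bodies are paraphrased rather than quoted from the patch:

    #define _PAGE_NX        (1UL << 63)     /* no-execute bit of a 64-bit PTE */

    static inline int pte_exec(pte_t pte)
    {
            return !(pte_val(pte) & _PAGE_NX);      /* executable iff NX is clear */
    }

    static inline pte_t pte_mkexec(pte_t pte)
    {
            return __pte(pte_val(pte) & ~_PAGE_NX); /* clear NX to allow execution */
    }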
2006-09-26  [PATCH] Remove bogus warning from early_ioremap  (Andi Kleen, 1 file, -1/+0)
It is correct for its only caller right now, but not for possible future others. Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Remove safe_smp_processor_id()  (Andi Kleen, 3 files, -29/+6)
And replace all users with ordinary smp_processor_id. The function was originally added to get some basic oops information out even if the GS register was corrupted. However that didn't work in some cases anymore, because printk is needed to print the oops and it already uses smp_processor_id(). Also GS register corruptions are not particularly common anymore. This also helps the Xen port, which would otherwise need to do this in a special way because it can't access the local APIC. Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Detect clock skew during suspend  (Rafael J. Wysocki, 1 file, -1/+9)
Detect the situations in which the time after a resume from disk would be earlier than the time before the suspend and prevent them from happening on x86_64. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] make numa_emulation() __init  (Andrew Morton, 1 file, -1/+1)
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Don't force reserve the 640k-1MB range  (Andi Kleen, 1 file, -32/+1)
x86-64 inherited code from i386 to force-reserve the 640k-1MB area. That was needed on some old systems. But we generally trust the e820 map to be correct on 64-bit systems and mark all areas that are not memory correctly. This patch allows using the real memory in there. Or rather, the only way to find out if it's still needed is to try. So far I'm optimistic. Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] mark init_amd() as __cpuinit  (Magnus Damm, 1 file, -1/+1)
The init_amd() function is only called from identify_cpu() which is already marked as __cpuinit. So let's mark it as __cpuinit. Signed-off-by: Magnus Damm <magnus@valinux.co.jp> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] x86_64 kernel mapping fix  (Keith Mannthey, 1 file, -25/+26)
Fix for the x86_64 kernel mapping code. Without this patch the update path only inits one pmd_page worth of memory and tramples any entries on it. Now the calling convention to phys_pmd_init and phys_init is to always pass a [pmd/pud] page, not an offset within a page. Signed-off-by: Keith Mannthey <kmannth@us.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-09-26  [PATCH] wire up oops_enter()/oops_exit()  (Andrew Morton, 1 file, -0/+3)
Implement pause_on_oops() on x86_64. AK: I redid the patch to do the oops_enter/exit in the existing oops_begin()/end(). This makes it much shorter. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] non lazy "sleazy" fpu implementation  (Arjan van de Ven, 2 files, -0/+11)
Right now the kernel on x86-64 has a 100% lazy fpu behavior: after *every* context switch a trap is taken for the first FPU use to restore the FPU context lazily. This is of course great for applications that have very sporadic or no FPU use (since then you avoid doing the expensive save/restore all the time). However for very frequent FPU users... you take an extra trap every context switch.

The patch below adds a simple heuristic to this code: after 5 consecutive context switches of FPU use, the lazy behavior is disabled and the context gets restored every context switch. If the app indeed uses the FPU, the trap is avoided. (The chance of the 6th time slice using the FPU after the previous 5 having done so is quite high, obviously.) After 256 switches, this is reset and lazy behavior is returned (until there are 5 consecutive ones again). The reason for this is to give apps that do longer bursts of FPU use the lazy behavior back after some time.

[akpm@osdl.org: place new task_struct field next to jit_keyring to save space] Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
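A condensed, hedged sketch of the heuristic: the real patch adds a counter field to task_struct and spreads the logic across __switch_to() and the FPU restore path, so the names and the single-function shape here are illustrative only:

    struct task {
            unsigned char fpu_counter;      /* consecutive timeslices that used the FPU */
    };

    static void fpu_switch_heuristic(struct task *prev, struct task *next,
                                     int prev_used_fpu, void (*restore_fpu)(void))
    {
            if (prev_used_fpu)
                    prev->fpu_counter++;    /* unsigned char wraps past 255, giving the
                                             * "reset after 256 switches" behaviour */
            else
                    prev->fpu_counter = 0;  /* streak broken: stay fully lazy */

            if (next->fpu_counter > 5)      /* 5+ consecutive FPU slices: restore eagerly */
                    restore_fpu();          /* avoids the lazy first-use FPU trap */
    }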
2006-09-26  [PATCH] fix bus numbering format in mmconfig warning  (Brice Goglin, 1 file, -3/+2)
Make an mmconfig warning print the bus id with a regular format. Signed-off-by: Brice Goglin <brice@myri.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-09-26  [PATCH] Auto size the per cpu area.  (Eric W. Biederman, 1 file, -5/+2)
Now for a completely different but trivial approach. I just boot tested it with 255 CPUs and everything worked. Currently everything we place in the per cpu area (except module data) is known at compile time. So instead of allocating a fixed size for the per_cpu area, allocate the number of bytes we need plus a fixed constant to be used for modules. It isn't perfect but it is much less of a pain to work with than what we are doing now. AK: fixed warning Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Andi Kleen <ak@suse.de>
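The sizing rule boils down to something like this hedged sketch; the linker symbols follow the usual kernel conventions, and the reserve constant and function name are illustrative:

    #include <linux/kernel.h>
    #include <linux/cache.h>

    extern char __per_cpu_start[], __per_cpu_end[];

    #define PERCPU_MODULE_RESERVE   8192    /* illustrative fixed headroom for modules */

    static unsigned long percpu_area_size(void)
    {
            /* exact compile-time size of the .data.percpu section ... */
            unsigned long size = __per_cpu_end - __per_cpu_start;

            /* ... plus a constant so modules can still allocate per-cpu data */
            return ALIGN(size + PERCPU_MODULE_RESERVE, SMP_CACHE_BYTES);
    }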
2006-09-26  [PATCH] make fault notifier unconditional and export it  (Andi Kleen, 1 file, -9/+3)
It's needed for external debuggers and the overhead is very small. Also make the actual notifier chain they use static. Cc: jbeulich@novell.com Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Make boot_param_data pure BSS  (Andi Kleen, 1 file, -1/+1)
Since it's all zero. Actually I think gcc 4+ will do that automatically, but earlier compilers won't Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] i386/x86-64: Improve Kconfig description of CRASH_DUMP  (Andi Kleen, 1 file, -1/+8)
Improve Kconfig description of CONFIG_CRASH_DUMP. Previously it was too brief to be useful. Cc: vgoyal@in.ibm.com Cc: ebiederm@xmission.com Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] X86_64 monotonic_clock goes backwards  (Dimitri Sivanich, 1 file, -7/+4)
I've noticed some erratic behavior while testing the X86_64 version of monotonic_clock(). While spinning in a loop reading monotonic clock values (pinned to a single cpu) I noticed that the difference between subsequent values occasionally went negative (time going backwards). I found that in the following code:

        this_offset = get_cycles_sync();
        /* FIXME: 1000 or 1000000? */
    --> offset = (this_offset - last_offset)*1000 / cpu_khz;
        }
        return base + offset;

the offset sometimes turns out to be 0, even though this_offset > last_offset.

Added fix from Toyo Abe <toyoa@mvista.com>: The x86_64-mm-monotonic-clock.patch in 2.6.18-rc4-mm2 made a change to the updating of monotonic_base. It now uses cycles_2_ns(). I suggest that a set_cyc2ns_scale() should be done prior to the setup_irq(), because cycles_2_ns() can be called from the timer ISR right after the irq0 is enabled.

Signed-off-by: Toyo Abe <toyoa@mvista.com> Signed-off-by: Dimitri Sivanich <sivanich@sgi.com> Signed-off-by: Andi Kleen <ak@suse.de>
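A quick worked example of that truncation, with illustrative numbers only:

    static unsigned long example_offset(void)
    {
            /* Illustrative values, not taken from the patch */
            unsigned long cpu_khz     = 2000000;    /* a 2.0 GHz CPU                */
            unsigned long last_offset = 1000000;    /* TSC value at the last update */
            unsigned long this_offset = 1001500;    /* read 1500 cycles later       */

            /* 1500 * 1000 / 2000000 == 0 in integer arithmetic, even though
             * roughly 750ns really elapsed; if monotonic_base was meanwhile
             * advanced on a different, non-truncating scale, a reader of
             * base + offset can observe time stepping backwards. */
            return (this_offset - last_offset) * 1000 / cpu_khz;
    }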
2006-09-26  [PATCH] x86: error_code is not safe for kprobes  (Prasanna S.P, 1 file, -12/+7)
This patch moves entry.S:error_entry to the .kprobes.text section; since code marked unsafe for kprobes jumps directly to entry.S::error_entry, it must be marked unsafe as well. This patch also moves all the ".previous.text" asm directives to ".previous" for kprobes sections. AK: Following a similar i386 patch from Chuck Ebbert. AK: Also merged Jeremy's fix in.

From: Jeremy Fitzhardinge <jeremy@goop.org>: KPROBE_ENTRY does a .section .kprobes.text, and expects its users to do a .previous at the end of the function. Unfortunately, if any code within the function switches sections, for example .fixup, then the .previous ends up putting all subsequent code into .fixup. Worse, any subsequent .fixup code gets intermingled with the code it's supposed to be fixing (which is also in .fixup). It's surprising this didn't cause more havoc. The fix is to use .pushsection/.popsection, so this stuff nests properly. A further cleanup would be to get rid of all .section/.previous pairs, since they're inherently fragile.

From: Chuck Ebbert <76306.1226@compuserve.com>: Because code marked unsafe for kprobes jumps directly to entry.S::error_code, that must be marked unsafe as well. The easiest way to do that is to move the page fault entry point to just before error_code and let it inherit the same section. Also moved all the ".previous" asm directives for kprobes sections to column 1 and removed ".text" from them.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Check for end of stack trace before falling back  (Andi Kleen, 1 file, -0/+2)
Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Merge stacktrace and show_trace  (Andi Kleen, 2 files, -211/+102)
This unifies the standard backtracer and the new stacktrace-in-memory backtracer. The standard one is converted to use callbacks, and stacktrace is then reimplemented using the new callbacks. The main advantage is that stacktrace can now use the new dwarf2 unwinder and avoid false positives in many cases. I kept it simple to make sure the standard backtracer stays reliable. Cc: mingo@elte.hu Signed-off-by: Andi Kleen <ak@suse.de>
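Roughly, the callback interface looks like the following hedged sketch; the type and function names approximate the eventual stacktrace_ops/dump_trace arrangement but are not copied from the patch:

    #include <linux/sched.h>
    #include <linux/stacktrace.h>

    /* The shared walker reports each return address it finds through a
     * callback; the oops printer and the stack-trace saver differ only in
     * the callbacks they pass. */
    struct backtrace_ops {
            void (*warning)(void *data, char *msg);           /* walker diagnostics */
            void (*address)(void *data, unsigned long addr);  /* one frame found    */
    };

    void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
                    unsigned long *stack, struct backtrace_ops *ops, void *data);

    /* Example consumer: save addresses into a struct stack_trace buffer. */
    static void save_stack_address(void *data, unsigned long addr)
    {
            struct stack_trace *trace = data;

            if (trace->nr_entries < trace->max_entries)
                    trace->entries[trace->nr_entries++] = addr;
    }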
2006-09-26  [PATCH] Don't access the APIC in safe_smp_processor_id when it is not mapped yet  (Andi Kleen, 2 files, -1/+3)
Lockdep can call the dwarf2 unwinder early, and the dwarf2 code uses safe_smp_processor_id, which tries to access the local APIC page. But that doesn't work before the APIC code has set up its fixmap. Check for this case and always return the boot cpu then. Cc: jbeulich@novell.com Cc: mingo@elte.hu Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] x86: Some preparatory cleanup for stack trace  (Andi Kleen, 1 file, -9/+5)
- Remove the unused all_contexts parameter; no caller used it.
- Move the skip argument into the structure (needed for follow-on patches).
Cc: mingo@elte.hu Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] Calgary IOMMU: eradicate sole remaining 80 chars per line offender  (Muli Ben-Yehuda, 1 file, -1/+2)
Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@us.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] remove tce_cache_blast_stress()  (Muli Ben-Yehuda, 1 file, -11/+0)
tce_cache_blast_stress was useful during bringup to stress the IOMMU's cache flushing. Now that we quiesce DMAs on every cache flush, using _stress() brings the machine down to its knees once you put it under load. Completely remove this debug / bringup code, which isn't useful anymore. Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@us.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-09-26  [PATCH] only verify the allocation bitmap if CONFIG_IOMMU_DEBUG is on  (Muli Ben-Yehuda, 1 file, -9/+35)
Introduce new function verify_bit_range(). Define two versions, one for CONFIG_IOMMU_DEBUG enabled and one for disabled. Previously we were checking that the bitmap was consistent every time we allocated or freed an entry in the TCE table, which is good for debugging but incurs an unnecessary penalty on non-debug builds. Signed-off-by: Muli Ben-Yehuda <muli@il.ibm.com> Signed-off-by: Jon Mason <jdmason@us.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
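The shape of the change is roughly this hedged sketch; the argument order and the exact message are assumptions, not the patch itself:

    #include <linux/kernel.h>
    #include <linux/bitops.h>

    #ifdef CONFIG_IOMMU_DEBUG
    /* Debug build: walk the range and complain if any bit is not in the
     * expected state before it gets set or cleared. */
    static void verify_bit_range(unsigned long *bitmap, int expected,
                                 unsigned long start, unsigned long end)
    {
            unsigned long idx;

            for (idx = start; idx < end; idx++)
                    if (!!test_bit(idx, bitmap) != expected)
                            printk(KERN_ERR "IOMMU bitmap entry %lu not %d\n",
                                   idx, expected);
    }
    #else
    /* Non-debug build: the check compiles away entirely. */
    static inline void verify_bit_range(unsigned long *bitmap, int expected,
                                        unsigned long start, unsigned long end)
    {
    }
    #endif /* CONFIG_IOMMU_DEBUG */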