author     Andy Lutomirski <luto@kernel.org>    2015-07-15 10:29:37 -0700
committer  Ingo Molnar <mingo@kernel.org>       2015-07-17 12:50:12 +0200
commit     a27507ca2d796cfa8d907de31ad730359c8a6d06 (patch)
tree       5d43b8cd95400c02ee2fef4f99dc8c1fb983a950 /arch
parent     0b22930ebad563ae97ff3f8d7b9f12060b4c6e6b (diff)
download   linux-a27507ca2d796cfa8d907de31ad730359c8a6d06.tar.bz2
x86/nmi/64: Reorder nested NMI checks
Check the repeat_nmi .. end_repeat_nmi special case first. The
next patch will rework the RSP check and, as a side effect, the
RSP check will no longer detect repeat_nmi .. end_repeat_nmi, so
we'll need this ordering of the checks.
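To make the new ordering concrete, here is a C model of the two checks. It is a sketch only: classify_nmi and its parameters are illustrative names, "rip" stands for the interrupted instruction pointer found at 8(%rsp) in the "iret" frame, and repeat_nmi/end_repeat_nmi stand for the addresses of those assembly labels.

#include <stdbool.h>
#include <stdint.h>

enum nmi_kind { FIRST_NMI, NESTED_NMI, NESTED_NMI_OUT };

static enum nmi_kind classify_nmi(uintptr_t rip, uintptr_t repeat_nmi,
                                  uintptr_t end_repeat_nmi,
                                  bool nmi_executing)
{
        /*
         * Check the repeat_nmi .. end_repeat_nmi window first: there
         * the outer NMI is rewriting its own "iret" frame, so we must
         * leave the frame alone and simply resume the outer NMI.
         */
        if (rip >= repeat_nmi && rip < end_repeat_nmi)
                return NESTED_NMI_OUT;

        /* Only afterwards consult the "NMI executing" flag. */
        if (nmi_executing)
                return NESTED_NMI;

        return FIRST_NMI;
}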
Note: this is more subtle than it appears. The check for
repeat_nmi .. end_repeat_nmi jumps straight out of the NMI code
instead of adjusting the "iret" frame to force a repeat. This
is necessary because the code between repeat_nmi and
end_repeat_nmi sets "NMI executing" and then writes to the
"iret" frame itself. If a nested NMI comes in and modifies the
"iret" frame while repeat_nmi is also modifying it, we'll end up
with garbage. The old code got this right, as does the new
code, but the new code is a bit more explicit.
If we were to move the check right after the "NMI executing"
check, then we'd get it wrong and have random crashes.
( Because the "NMI executing" check would jump to the code that would
modify the "iret" frame without checking if the interrupted NMI was
currently modifying it. )
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/entry/entry_64.S  34
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index f54d63a60a3b..5c4ab384b84f 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1361,7 +1361,24 @@ ENTRY(nmi)
 	/*
 	 * Determine whether we're a nested NMI.
 	 *
-	 * First check "NMI executing".  If it's set, then we're nested.
+	 * If we interrupted kernel code between repeat_nmi and
+	 * end_repeat_nmi, then we are a nested NMI.  We must not
+	 * modify the "iret" frame because it's being written by
+	 * the outer NMI.  That's okay; the outer NMI handler is
+	 * about to call do_nmi anyway, so we can just
+	 * resume the outer NMI.
+	 */
+
+	movq	$repeat_nmi, %rdx
+	cmpq	8(%rsp), %rdx
+	ja	1f
+	movq	$end_repeat_nmi, %rdx
+	cmpq	8(%rsp), %rdx
+	ja	nested_nmi_out
+1:
+
+	/*
+	 * Now check "NMI executing".  If it's set, then we're nested.
 	 * This will not detect if we interrupted an outer NMI just
 	 * before IRET.
 	 */
@@ -1387,21 +1404,6 @@ ENTRY(nmi)
 
 nested_nmi:
 	/*
-	 * If we interrupted an NMI that is between repeat_nmi and
-	 * end_repeat_nmi, then we must not modify the "iret" frame
-	 * because it's being written by the outer NMI.  That's okay;
-	 * the outer NMI handler is about to call do_nmi anyway,
-	 * so we can just resume the outer NMI.
-	 */
-	movq	$repeat_nmi, %rdx
-	cmpq	8(%rsp), %rdx
-	ja	1f
-	movq	$end_repeat_nmi, %rdx
-	cmpq	8(%rsp), %rdx
-	ja	nested_nmi_out
-
-1:
-	/*
 	 * Modify the "iret" frame to point to repeat_nmi, forcing another
 	 * iteration of NMI handling.
 	 */
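For completeness, a minimal harness for the sketch above, with hypothetical label addresses; it is compiled together with the classify_nmi example and is purely illustrative:

#include <assert.h>
#include <stdio.h>

int main(void)
{
        /* Hypothetical addresses standing in for the assembly labels. */
        uintptr_t repeat = 0x1000, end_repeat = 0x1080;

        /* Interrupted inside the window: resume the outer NMI untouched. */
        assert(classify_nmi(0x1040, repeat, end_repeat, true) == NESTED_NMI_OUT);
        /* Outside the window with the flag set: normal nested handling. */
        assert(classify_nmi(0x2000, repeat, end_repeat, true) == NESTED_NMI);
        /* Flag clear: this is a first, non-nested NMI. */
        assert(classify_nmi(0x2000, repeat, end_repeat, false) == FIRST_NMI);

        puts("model matches the described ordering");
        return 0;
}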