author     Arun KS <arunks@codeaurora.org>                      2018-12-28 00:34:29 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>       2018-12-28 12:11:47 -0800
commit     ca79b0c211af63fa3276f0e3fd7dd9ada2439839 (patch)
tree       a9198e85582744619903c85349583bb453fe36cb /security
parent     9705bea5f833f4fc21d5bef5fce7348427f76ea4 (diff)
download   linux-ca79b0c211af63fa3276f0e3fd7dd9ada2439839.tar.bz2
mm: convert totalram_pages and totalhigh_pages variables to atomic
totalram_pages and totalhigh_pages are made static inline functions.  The main
motivation was that the managed_page_count_lock handling was complicating
things.  It was discussed at length here,
https://lore.kernel.org/patchwork/patch/995739/#1181785
So it seemed better to remove the lock and convert the variables to atomic,
with preventing potential store-to-read tearing as a bonus.

[akpm@linux-foundation.org: coding style fixes]
Link: http://lkml.kernel.org/r/1542090790-21750-4-git-send-email-arunks@codeaurora.org
Signed-off-by: Arun KS <arunks@codeaurora.org>
Suggested-by: Michal Hocko <mhocko@suse.com>
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Reviewed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
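To illustrate the conversion the message describes, the sketch below shows the
accessor pattern in simplified form: the counter becomes an atomic_long_t and
readers go through a static inline helper instead of reading a plain variable.
This is a minimal illustration of the pattern, not the full include/linux/mm.h
change from the series.

    /* The page count now lives in an atomic_long_t instead of a plain
     * unsigned long protected by managed_page_count_lock. */
    extern atomic_long_t _totalram_pages;

    /* Readers call the accessor rather than touching the variable directly;
     * atomic_long_read() prevents store-to-read tearing without a lock. */
    static inline unsigned long totalram_pages(void)
    {
            return (unsigned long)atomic_long_read(&_totalram_pages);
    }

Callers such as ima_add_kexec_buffer() in the diff below are updated to call
totalram_pages() instead of reading the old totalram_pages variable.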
Diffstat (limited to 'security')
-rw-r--r--   security/integrity/ima/ima_kexec.c   2
1 file changed, 1 insertion, 1 deletion
diff --git a/security/integrity/ima/ima_kexec.c b/security/integrity/ima/ima_kexec.c
index 16bd18747cfa..d6f32807b347 100644
--- a/security/integrity/ima/ima_kexec.c
+++ b/security/integrity/ima/ima_kexec.c
@@ -106,7 +106,7 @@ void ima_add_kexec_buffer(struct kimage *image)
 	kexec_segment_size = ALIGN(ima_get_binary_runtime_size() +
 				   PAGE_SIZE / 2, PAGE_SIZE);
 	if ((kexec_segment_size == ULONG_MAX) ||
-	    ((kexec_segment_size >> PAGE_SHIFT) > totalram_pages / 2)) {
+	    ((kexec_segment_size >> PAGE_SHIFT) > totalram_pages() / 2)) {
 		pr_err("Binary measurement list too large.\n");
 		return;
 	}