author    | Sean Christopherson <seanjc@google.com> | 2021-12-07 22:09:24 +0000
committer | Paolo Bonzini <pbonzini@redhat.com> | 2022-02-10 13:50:35 -0500
commit    | 9c52f6b3d8c09df75b72dab9a0e6eb2b70435ae1 (patch)
tree      | 6bbf0bc756a060c320872114d8ed72f1447a2d86 /arch/h8300
parent    | 79661c3766f878aa9b4e20b4f2f8683431e5ec01 (diff)
download  | linux-9c52f6b3d8c09df75b72dab9a0e6eb2b70435ae1.tar.bz2
KVM: x86: Shove vp_bitmap handling down into sparse_set_to_vcpu_mask()
Move the vp_bitmap "allocation" that's needed to handle mismatched vp_index
values down into sparse_set_to_vcpu_mask() and drop __always_inline from
said helper. The need for an intermediate vp_bitmap is a detail that's
specific to the sparse translation with mismatched VP<=>vCPU indexes and
does not need to be exposed to the caller.
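The sketch below illustrates the shape of the change as a standalone userspace example, not the actual arch/x86/kvm/hyperv.c code: the struct, helper names, and constants are illustrative only. The point it demonstrates is that the intermediate vp_bitmap lives entirely inside the translation helper and is only consulted when VP indexes and vCPU indexes don't line up 1:1, so callers just hand in a vcpu_mask.

```c
/*
 * Illustrative sketch of moving the vp_bitmap handling into the
 * sparse-set translation helper.  Names and types are hypothetical,
 * not the real KVM signatures.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_SPARSE_BANKS 64                  /* 64 banks x 64 VPs per bank */
#define MAX_VCPUS        (MAX_SPARSE_BANKS * 64)

struct toy_vm {
	int nr_vcpus;
	bool has_mismatched_vp_indexes;      /* any vCPU with vp_index != idx? */
	uint32_t vp_index[MAX_VCPUS];        /* per-vCPU Hyper-V VP index */
};

/* Expand the sparse banks into a dense per-VP bitmap. */
static void sparse_banks_to_vp_bitmap(const uint64_t *sparse_banks,
				      uint64_t valid_bank_mask,
				      uint64_t *vp_bitmap)
{
	int bank, sbank = 0;

	memset(vp_bitmap, 0, MAX_SPARSE_BANKS * sizeof(*vp_bitmap));
	for (bank = 0; bank < MAX_SPARSE_BANKS; bank++) {
		if (valid_bank_mask & (1ULL << bank))
			vp_bitmap[bank] = sparse_banks[sbank++];
	}
}

/*
 * Translate a Hyper-V sparse VP set into a mask of vCPU indexes.  The
 * vp_bitmap "allocation" is a local detail of this helper, invisible to
 * callers, and is only used when at least one vCPU has vp_index != index.
 */
static void sparse_set_to_vcpu_mask(const struct toy_vm *vm,
				    const uint64_t *sparse_banks,
				    uint64_t valid_bank_mask,
				    uint64_t *vcpu_mask)
{
	uint64_t vp_bitmap[MAX_SPARSE_BANKS];
	int i;

	if (!vm->has_mismatched_vp_indexes) {
		/* VP index == vCPU index, so the sparse set *is* the mask. */
		sparse_banks_to_vp_bitmap(sparse_banks, valid_bank_mask,
					  vcpu_mask);
		return;
	}

	/* Mismatched indexes: expand to VP space, then remap per vCPU. */
	sparse_banks_to_vp_bitmap(sparse_banks, valid_bank_mask, vp_bitmap);
	memset(vcpu_mask, 0, MAX_SPARSE_BANKS * sizeof(*vcpu_mask));
	for (i = 0; i < vm->nr_vcpus; i++) {
		uint32_t vp = vm->vp_index[i];

		if (vp_bitmap[vp / 64] & (1ULL << (vp % 64)))
			vcpu_mask[i / 64] |= 1ULL << (i % 64);
	}
}
```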
Regarding the __always_inline, prior to commit f21dd494506a ("KVM: x86:
hyperv: optimize sparse VP set processing") the helper, then named
hv_vcpu_in_sparse_set(), was a tiny bit of code that effectively boiled
down to a handful of bit ops. The __always_inline was understandable, if
not justifiable. Since the aforementioned change, sparse_set_to_vcpu_mask()
is a chunky 350-450+ bytes of code without KASAN=y, and balloons to 1100+
with KASAN=y. In other words, it has no business being forcefully inlined.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20211207220926.718794-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>