Type: | SECURITY |
Severity: | IMPORTANT |
Release Date: | 2018-11-13 |
[4.3.0-55.el6.186.195]
- From 7d8b5dd98463524686bdee8b973b53c00c232122 Mon Sep 17 00:00:00 2001
From: Liu Jinsong
Date: Mon, 25 Nov 2013 11:19:04 +0100
Subject: [PATCH v2 6/6] x86/xsave: fix nonlazy state handling
Nonlazy xstates should be xsaved each time vcpu_save_fpu() runs.
Operations on nonlazy xstates do not trigger a #NM exception, so
the state must be restored whenever the vcpu is scheduled in and
saved whenever it is scheduled out.
Currently this bug affects AMD LWP feature, and later Intel MPX
feature. With the bugfix both LWP and MPX will work fine.
Signed-off-by: Liu Jinsong
Furthermore, during restore we also need to set nonlazy_xstate_used
according to the incoming accumulated XCR0.
Also adjust the changes to i387.c such that there won't be a pointless
clts()/stts() pair.
Signed-off-by: Jan Beulich
Based on commit from OVM3.4.5
(cherry picked from commit 7d8b5dd98463524686bdee8b973b53c00c232122)
Backported-by: Jie Li
Reviewed-by: Ross Philipson
Reviewed-by: Darren Kenny
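The save rule this patch establishes can be modelled as below. This is an illustrative sketch, not the actual Xen code; the struct and field names are hypothetical stand-ins for the vCPU FPU bookkeeping:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the fix: lazy state is saved only when dirtied
 * (the #NM-gated path), while nonlazy state (LWP, MPX) must be saved on
 * every schedule-out, because accesses to it never raise #NM. */
struct vcpu_fpu {
    bool fpu_dirtied;          /* lazy state touched since last restore */
    bool nonlazy_xstate_used;  /* set from the accumulated XCR0 on restore */
    bool lazy_saved;
    bool nonlazy_saved;
};

static void vcpu_save_fpu(struct vcpu_fpu *v)
{
    if (v->fpu_dirtied)
        v->lazy_saved = true;      /* pre-existing lazy path */
    if (v->nonlazy_xstate_used)
        v->nonlazy_saved = true;   /* the fix: save unconditionally */
}
```

With this in place a vCPU that never dirtied its lazy state still gets its LWP/MPX state saved on schedule-out.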
[4.3.0-55.el6.186.194]
- From 32df6a92ef8065da5ab471f86ab9c67df29fcafa Mon Sep 17 00:00:00 2001
From: Jan Beulich
Date: Mon, 30 Jul 2018 14:17:45 +0200
Subject: [PATCH v2 5/6] x86/spec-ctrl: command line handling adjustments
For one, 'no-xen' should not imply 'no-eager-fpu', as 'eager FPU' mode
is to guard guests, not Xen itself, which is also expressed so by
print_details().
And then opt_ssbd, despite being off by default, should also be cleared
by the 'no' and 'no-xen' sub-options.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
master commit: ac3f9a72141a48d40fabfff561d5a7dc0e1b810d
master date: 2018-07-10 12:22:31 +0200
Cherry-picked from: 2d69b6d00d6ac04bda01e7323bcd006f0f88ceb7
These are followup fixes for Lazy FPU/XSA-267
Orabug: 28696347
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Based on commit from OVM3.4.5
(Cherry-picked from: 32df6a92ef8065da5ab471f86ab9c67df29fcafa)
Backported-by: Jie Li
Reviewed-by: Ross Philipson
Reviewed-by: Darren Kenny
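The adjusted sub-option semantics can be condensed as follows. This is a hypothetical sketch, not Xen's actual parser; the struct and field names are illustrative. "no" clears everything, while "no-xen" clears Xen-only protections, including opt_ssbd, but deliberately leaves eager FPU alone since it guards guests rather than Xen:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

struct spec_opts {
    bool ssbd;       /* stand-in for opt_ssbd */
    bool eager_fpu;  /* a guest protection, not a Xen-only setting */
    bool xen_ibrs;   /* stand-in for Xen's own protections */
};

static void parse_spec_ctrl_sub(struct spec_opts *o, const char *s)
{
    if (strcmp(s, "no") == 0) {
        o->ssbd = o->eager_fpu = o->xen_ibrs = false;
    } else if (strcmp(s, "no-xen") == 0) {
        o->ssbd = false;       /* the fix: also cleared by "no-xen" */
        o->xen_ibrs = false;
        /* eager_fpu intentionally untouched: it guards guests, not Xen */
    }
}
```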
[4.3.0-55.el6.186.193]
- From f9f5f0bb3188437559ca74021b24b83fe52947d4 Mon Sep 17 00:00:00 2001
From: Jan Beulich
Date: Thu, 28 Jun 2018 12:28:21 +0200
Subject: [PATCH v2 4/6] x86/HVM: don't cause #NM to be raised in Xen
The changes for XSA-267 did not touch management of CR0.TS for HVM
guests. In fully eager mode this bit should never be set when
respective vCPU-s are active, or else hvmemul_get_fpu() might leave it
wrongly set, leading to #NM in hypervisor context.
{svm,vmx}_enter() and {svm,vmx}_fpu_dirty_intercept() become unreachable
this way. Explicit {svm,vmx}_fpu_leave() invocations need to be guarded
now.
With no CR0.TS management necessary in fully eager mode, there's also no
need anymore to intercept #NM.
Reported-by: Charles Arnold
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Cherry-picked from: 598a375f5230d91ac88e76a9f4b4dde4a62a4c5b
These are followup fixes for Lazy FPU/XSA-267
Orabug: 28696347
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Based on commit from OVM3.4.5
(Cherry-picked from: f9f5f0bb3188437559ca74021b24b83fe52947d4)
Conflicts:
- context difference only
Backported-by: Jie Li
Reviewed-by: Ross Philipson
Reviewed-by: Darren Kenny
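The intercept change can be modelled as below; TRAP_nm is the real x86 vector number, while the function itself is an illustrative sketch rather than Xen's actual exception-bitmap code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TRAP_nm 7   /* #NM, device-not-available */

/* In fully eager mode CR0.TS is never set while a vCPU is active, so #NM
 * can never fire for FPU lazy switching and the intercept is dropped. */
static uint32_t hvm_exception_intercepts(bool fully_eager_fpu)
{
    uint32_t bitmap = 0;
    if (!fully_eager_fpu)
        bitmap |= 1u << TRAP_nm;   /* lazy mode still needs #NM */
    return bitmap;
}
```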
[4.3.0-55.el6.186.192]
- From fccf5221ce4a55268e31a6c069b4889ee35416e6 Mon Sep 17 00:00:00 2001
From: Jan Beulich
Date: Thu, 28 Jun 2018 12:27:56 +0200
Subject: [PATCH v2 3/6] x86/EFI: further correct FPU state handling around runtime calls
We must not leave a vCPU with CR0.TS clear when it is not in fully eager
mode and has not touched non-lazy state. Instead of adding a 3rd
invocation of stts() to vcpu_restore_fpu_eager(), consolidate all of
them into a single one done at the end of the function.
Rename the function at the same time to better reflect its purpose, as
the patch touches all of its occurrences anyway.
The new function parameter is not really well named, but
'need_stts_if_not_fully_eager' seemed excessive to me.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Reviewed-by: Paul Durrant
Cherry-picked from: b7b7c4df2d251b1feba217939ea0b618094a48c2
These are followup fixes for Lazy FPU/XSA-267
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
The previous version of this patch was reverted. This second version
of this patch fixes a crash in xrstor due to idle domain threads not
having an xsave area which leads to a NULL ptr deref. When Xen is
booted as an EFI boot loader, it calls run time services at early
boot time where this occurs on an idle domain vcpu. The fix is to check
for a NULL xsave_area in xsave() and xrstor().
Orabug: 28746888
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Based on commit from OVM3.4.5
(Cherry-picked from: fccf5221ce4a55268e31a6c069b4889ee35416e6)
Conflicts:
- context differences
- is_hvm_vcpu() is used instead of is_pv_vcpu() because 3.3 doesn't
have the latter
Backported-by: Jie Li
Reviewed-by: Ross Philipson
Reviewed-by: Darren Kenny
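The crash fix described above amounts to a NULL guard; a minimal model with illustrative names (not the actual Xen functions) looks like this:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct xsave_struct { uint8_t data[64]; };

/* Idle-domain vCPUs have no xsave area allocated. During early EFI
 * runtime-service calls xsave()/xrstor() can run on such a vCPU, so both
 * must tolerate a NULL area instead of dereferencing it. */
static int xrstor_guarded(const struct xsave_struct *xsave_area)
{
    if (xsave_area == NULL)
        return 0;   /* nothing to restore for an idle vCPU */
    /* real code would execute the XRSTOR instruction here */
    return 1;
}
```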
[4.3.0-55.el6.186.191]
- From 8b43dba27c8ea017aae4908bb380693263624e10 Mon Sep 17 00:00:00 2001
From: Jan Beulich
Date: Thu, 28 Jun 2018 12:27:34 +0200
Subject: [PATCH v2 2/6] x86/EFI: fix FPU state handling around runtime calls
There are two issues. First, the nonlazy xstates were never restored
after returning from the runtime call.
Secondly, with the fully_eager_fpu mitigation for XSA-267 / LazyFPU, the
unilateral stts() is no longer correct, and hits an assertion later when
a lazy state restore tries to occur for a fully eager vcpu.
Fix both of these issues by calling vcpu_restore_fpu_eager(). As EFI
runtime services can be used in the idle context, the idle assertion
needs to move until after the fully_eager_fpu check.
Introduce a 'curr' local variable and replace other uses of 'current'
at the same time.
Reported-by: Andrew Cooper
Signed-off-by: Jan Beulich
Signed-off-by: Andrew Cooper
Tested-by: Juergen Gross
Cherry-picked from: ba7d0117ab535280e2b6821aa6d323053ac6b266
These are followup fixes for Lazy FPU/XSA-267
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
The previous version of this patch was reverted. This second version
of this patch fixes a crash in xrstor due to idle domain threads not
having an xsave area which leads to a NULL ptr deref. When Xen is
booted as an EFI boot loader, it calls run time services at early
boot time where this occurs on an idle domain vcpu. The fix is to check
for a NULL xsave_area in xsave() and xrstor().
Orabug: 28746888
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Based on commit from OVM3.4.5
(Cherry-picked from: 8b43dba27c8ea017aae4908bb380693263624e10)
Conflicts:
- context differences
- is_hvm_vcpu() is used instead of is_pv_vcpu() because 3.3 doesn't
have the latter
Backported-by: Jie Li
Reviewed-by: Ross Philipson
Reviewed-by: Darren Kenny
[4.3.0-55.el6.186.190]
- From 3ed5db85b82cb67e669fdddffa205187a6343a9c Mon Sep 17 00:00:00 2001
From: Jan Beulich
Date: Tue, 24 Jun 2014 09:42:49 +0200
Subject: [PATCH v2 1/6] x86/EFI: allow FPU/XMM use in runtime service functions
UEFI spec update 2.4B developed a requirement to enter runtime service
functions with CR0.TS (and CR0.EM) clear, thus making feasible the
already previously stated permission for these functions to use some of
the XMM registers. Enforce this requirement (along with the connected
ones on FPU control word and MXCSR) by going through a full FPU save
cycle (if the FPU was dirty) in efi_rs_enter() (along with loading the
specified values into the other two registers).
Note that the UEFI spec mandates that extension registers other than
XMM ones (for our purposes all that get restored eagerly) are preserved
across runtime function calls, hence there's nothing we need to restore
in efi_rs_leave() (they do get saved, but just for simplicity's sake).
Signed-off-by: Jan Beulich
master commit: e0fe297dabc96d8161d568f19a99722c4739b9f9
master date: 2014-06-18 15:53:27 +0200
Based on commit from OVM3.4.5.
This is a prerequisite for XSA-267
(Cherry-picked from: 3ed5db85b82cb67e669fdddffa205187a6343a9c)
Conflicts:
- context difference only
Backported-by: Jie Li
Reviewed-by: Ross Philipson
Reviewed-by: Darren Kenny
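The entry-path requirement can be sketched as follows. FCW_DEFAULT and MXCSR_DEFAULT are the real values the UEFI spec mandates; the struct and function are an illustrative model, not Xen's efi_rs_enter():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define FCW_DEFAULT   0x037f   /* x87 control word mandated by the UEFI spec */
#define MXCSR_DEFAULT 0x1f80   /* MXCSR value mandated by the UEFI spec */

struct cpu_fpu_ctx {
    bool cr0_ts, fpu_dirty, saved;
    uint16_t fcw;
    uint32_t mxcsr;
};

/* Model of the entry path: run a full FPU save cycle if the FPU was
 * dirty, then clear CR0.TS and load the spec-mandated control values. */
static void efi_rs_enter(struct cpu_fpu_ctx *c)
{
    if (c->fpu_dirty) {
        c->saved = true;       /* full save cycle */
        c->fpu_dirty = false;
    }
    c->cr0_ts = false;         /* UEFI 2.4B: enter with TS (and EM) clear */
    c->fcw = FCW_DEFAULT;
    c->mxcsr = MXCSR_DEFAULT;
}
```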
[4.3.0-55.el6.186.189]
- From: Julien Grall
Signed-off-by: Julien Grall
Acked-by: Keir Fraser
CC: Jan Beulich
This is a prerequisite for XSA-273
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.188]
- From: Ross Philipson
This utility can offline all SMT siblings. It can be used to skip CPUs
dedicated to dom0. For symmetry with other tools, it can also be used
to online SMT siblings.
This is part of XSA-273
Orabug: 28487050
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.187]
- From: Andrew Cooper
This mitigation requires up-to-date microcode, and is enabled by default on
affected hardware if available.
The default for SMT/Hyperthreading is far more complicated to reason about,
not least because we don't know if the user is going to want to run any HVM
guests to begin with. If an explicit default isn't given, nag the user to
perform a risk assessment and choose an explicit default, and leave other
configuration to the toolstack.
This is part of XSA-273 / CVE-2018-3620.
Signed-off-by: Andrew Cooper
Orabug: 28487050
This version differs from the upstream version in that it does the
L1D flush from the tail end of vmx_vmenter_helper(). Having the flush
done during the VMCS restore causes a platform hang in our version of
Xen. This issue will be investigated separately. The rest
are just some minor context and offset diffs.
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.186]
- From: Andrew Cooper
Guests (outside of the nested virt case, which isn't supported yet) don't need
L1D_FLUSH for their L1TF mitigations, but offering/emulating MSR_FLUSH_CMD is
easy and doesn't pose an issue for Xen.
The MSR is offered to HVM guests only. PV guests attempting to use it would
trap for emulation, and the L1D cache would fill long before the return to
guest context. As such, PV guests can't make any use of the L1D_FLUSH
functionality.
This is part of XSA-273 / CVE-2018-3646.
Signed-off-by: Andrew Cooper
Orabug: 28487050
This patch differs a fair amount from the upstream version. The CPUID policy
in this version of Xen is a partial implementation of what is upstream so the
management of it is different. The end result is the same functionality
though.
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
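The wrmsr policy described above can be sketched as follows. MSR_FLUSH_CMD (0x10b) and its single defined bit are the real architectural values; the function is an illustrative condensation, not Xen's guest_wrmsr():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MSR_FLUSH_CMD 0x0000010b    /* architectural MSR number */
#define FLUSH_CMD_L1D (1ull << 0)   /* the only defined bit */

/* Only HVM guests that were offered L1D_FLUSH may write the MSR, and
 * only bit 0 is valid; everything else yields #GP (modelled as -1). */
static int wrmsr_flush_cmd(bool is_hvm, bool l1d_flush_offered, uint64_t val)
{
    if (!is_hvm || !l1d_flush_offered)
        return -1;                  /* MSR not present for this guest */
    if (val & ~FLUSH_CMD_L1D)
        return -1;                  /* reserved bits set */
    /* real code would issue the hardware WRMSR here */
    return 0;
}
```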
[4.3.0-55.el6.186.185]
- From: Andrew Cooper
This is part of XSA-273 / CVE-2018-3646.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
Orabug: 28487050
Mostly the same. The file tools/misc/xen-cpuid.c does not exist
so those changes did not apply. Also the CPUIDs are specified differently
in this version of Xen. The rest were just context and offset diffs.
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.184]
- From: Andrew Cooper
Safe PTE addresses for L1TF mitigations are ones which are within the L1D
address width (may be wider than reported in CPUID), and above the highest
cacheable RAM/NVDIMM/BAR/etc.
All logic here is best-effort heuristics, which should in practice be fine for
most hardware. Future work will see about disentangling and feeding SRAT
information into the heuristics, as well as having L0 pass this information
down to lower levels when virtualised.
This is part of XSA-273 / CVE-2018-3620.
Signed-off-by: Andrew Cooper
Orabug: 28487050
Patch mostly the same, some context differences and offset mismatches.
There are some bits of the PV support in this patch also but they were
left there to keep the patch similar to its upstream cousin.
Also we are not picking up original patch's EFI code changes because they
are related to PV only.
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
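The heuristic can be modelled roughly as below. This is a best-effort illustrative sketch under the description above; the real computation also rounds, clamps, and folds in the caveats about CPUID under-reporting:

```c
#include <assert.h>
#include <stdint.h>

/* Pick an address with a bit set just inside the L1D address width but
 * above the highest cacheable address, so a speculatively-formed PTE can
 * never alias cached RAM/NVDIMM/BAR contents. Illustrative names. */
static uint64_t l1tf_safe_maddr(unsigned int l1d_addr_bits,
                                uint64_t top_of_cacheable)
{
    uint64_t candidate = 1ull << (l1d_addr_bits - 1);
    return candidate > top_of_cacheable ? candidate : 0;  /* 0: none found */
}
```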
[4.3.0-55.el6.186.183]
- From: Jan Beulich
Shared resources (L1 cache and TLB in particular) present a risk of
information leak via side channels. Provide a means to avoid use of
hyperthreads in such cases.
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Reviewed-by: Andrew Cooper
This is a prerequisite for XSA-273
Orabug: 28487050
(cherry picked from commit d8f974f1a646c0200b97ebcabb808324b288fadb)
Mostly the same. Slight difference in logic in the setup CPU online/offline
code. Also use BAD_APICID instead of INVALID_CUID.
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Reviewed-by: Joao Martins
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.182]
- From: Jan Beulich
While I've run into the issue with further patches in place which no
longer guarantee the per-CPU area to start out as all zeros, the
CPU_DOWN_FAILED processing looks to have the same issue: By not zapping
the per-CPU cpupool pointer, cpupool_cpu_add()'s (indirect) invocation
of schedule_cpu_switch() will trigger the 'c != old_pool' assertion
there.
Clearing the field during CPU_DOWN_PREPARE is too early (afaict this
should not happen before cpu_disable_scheduler()). Clearing it in
CPU_DEAD and CPU_DOWN_FAILED would be an option, but would take the same
piece of code twice. Since the field's value shouldn't matter while the
CPU is offline, simply clear it (implicitly) for CPU_ONLINE and
CPU_DOWN_FAILED, but only for other than the suspend/resume case (which
gets specially handled in cpupool_cpu_remove()).
By adjusting the conditional in cpupool_cpu_add() CPU_DOWN_FAILED
handling in the suspend case should now also be handled better.
Signed-off-by: Jan Beulich
Reviewed-by: Juergen Gross
This is a prerequisite for XSA-273
Orabug: 28487050
(cherry picked from commit cb1ae9a27819cea0c5008773c68a7be6f37eb0e5)
Applied cleanly.
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Reviewed-by: Konrad Rzeszutek Wilk
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
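The notifier logic described above can be modelled as follows; the names are illustrative stand-ins, not Xen's cpupool.c:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum cpu_notify { CPU_ONLINE, CPU_DOWN_FAILED, CPU_DEAD };

struct pcpu { const void *cpupool; };   /* per-CPU cpupool pointer */

/* On CPU_ONLINE and CPU_DOWN_FAILED the stale per-CPU cpupool pointer is
 * cleared before the CPU is re-added, except in the suspend/resume case,
 * which cpupool_cpu_remove() handles specially. */
static void cpupool_cpu_add_model(struct pcpu *p, enum cpu_notify ev,
                                  bool resuming)
{
    if ((ev == CPU_ONLINE || ev == CPU_DOWN_FAILED) && !resuming)
        p->cpupool = NULL;   /* avoids the 'c != old_pool' assertion later */
}
```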
[4.3.0-55.el6.186.181]
- From: Dario Faggioli
in fact, before this change, shutting down or suspending the
system with some CPUs not assigned to any cpupool, would
crash as follows:
(XEN) Xen call trace:
(XEN)    [<call trace entries not preserved>]
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at cpu.c:191
(XEN) ****************************************
This is because -EBUSY was being returned for free CPUs when
trying to tear them down, making cpu_down() unhappy.
It is certainly impractical to forbid shutting down or
suspending if there are unassigned CPUs, so this change
fixes the above by just avoiding returning -EBUSY for those
CPUs. If shutting off, that does not matter much anyway. If
suspending, we make sure that the CPUs remain unassigned
when resuming.
While there, take the chance to:
- fix the doc comment of cpupool_cpu_remove() (it was
wrong);
- improve comments in general around and in cpupool_cpu_remove()
and cpupool_cpu_add();
- add a couple of ASSERT()-s for checking consistency.
Signed-off-by: Dario Faggioli
Reviewed-by: Juergen Gross
Tested-by: Juergen Gross
This is a prerequisite for XSA-273
For OVM3.3 only and OVM3.4 already has the commit.
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
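The fix can be condensed as below; this is an illustrative model, not the actual cpupool_cpu_remove() (which the following patch in this series relaxes further for CPUs in other pools at shutdown):

```c
#include <assert.h>
#include <stdbool.h>

#define XEN_EBUSY (-16)   /* illustrative errno value */

/* A CPU assigned to no cpupool must be allowed to go down (return 0)
 * during shutdown/suspend; only CPUs still busy in a pool other than
 * Pool-0 may be refused at this point in the series. */
static int cpupool_cpu_remove_model(bool assigned, bool in_pool0)
{
    if (!assigned)
        return 0;                     /* free CPU: tear down cleanly */
    return in_pool0 ? 0 : XEN_EBUSY;  /* other pools still refuse here */
}
```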
[4.3.0-55.el6.186.180]
- From: Juergen Gross
When shutting down the machine while there are cpus in a cpupool other than
Pool-0 a crash is triggered due to cpupool handling rejecting offlining the
non-boot cpus in other cpupools.
It is easy to detect this case and allow offlining those cpus.
Reported-by: Stefan Bader
Signed-off-by: Juergen Gross
Tested-by: Stefan Bader
master commit: 05377dede434c746e6708f055858378d20f619db
master date: 2014-07-23 18:03:19 +0200
This is a prerequisite for XSA-273
For OVM3.3 only and OVM3.4 already has the commit.
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.179]
- From: Dario Faggioli
which means such an event must be handled at the call sites
of cpupool_assign_cpu_locked(), and the error, if occurring,
properly propagated.
Signed-off-by: Dario Faggioli
Reviewed-by: Juergen Gross
This is a prerequisite for XSA-273
For OVM3.3 only and OVM3.4 already has the commit.
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.178]
- From: Andrew Cooper
Intel Core processors since at least Nehalem speculate past #NM, which is the
mechanism by which lazy FPU context switching is implemented.
On affected processors, Xen must use fully eager FPU context switching to
prevent guests from being able to read FPU state (SSE/AVX/etc) from previously
scheduled vcpus.
This is part of XSA-267
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
Taken from the xsa267-4.6-2.patch posted with the XSA
Conflicts:
Context differences only
Orabug: 28135175
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Acked-by: Adnan Misherfi
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.177]
- From: Andrew Cooper
This is controlled on a per-vcpu basis for flexibility.
This is part of XSA-267
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
Taken from the xsa267-4.6-1.patch posted with the XSA
Conflicts:
Context differences only
Orabug: 28135175
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Acked-by: Adnan Misherfi
Backported-by: Zhenzhong Duan
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.176]
- From 2b1ac3ced59e3d0309572c146d74e90a2e861965 Mon Sep 17 00:00:00 2001
From: Boris Ostrovsky
Date: Wed, 29 Aug 2018 03:12:24 +0800
Subject: [PATCH OVM3.3.x v2 18/18] x86/spectre: Fix SPEC_CTRL_ENTRY_FROM_INTR_IST macro
SPEC_CTRL_ENTRY_FROM_INTR_IST is trying to follow upstream implementation and
is using STACK_CPUINFO_FIELD macro to access cpuinfo's xen_spec_ctrl field.
Unfortunately, OVM's definition of this field is obsolete; we have not updated
to commit 4f6aea06 ('x86: reduce code size of struct cpu_info member accesses').
We are calculating the end of the stack as
movq $STACK_SIZE-1, %r14;
orq %rsp, %r14;
which is an open-coded implementation of GET_STACK_END from the above commit, and then
using older definition of STACK_CPUINFO_FIELD. This results in %eax getting
garbage, causing #GP when writing MSR 0x48.
Instead of backporting 4f6aea06 we should simply open-code STACK_CPUINFO_FIELD
as defined in that commit, especially since we do it elsewhere in this file.
Signed-off-by: Boris Ostrovsky
Reviewed-by: Bhavesh Davda
Backported from OVM3.4 commit 37c139ca51ded61f1aa064d0718643054cdb852a
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
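The asm above can be modelled in C as follows; STACK_SIZE here assumes the usual 32KiB (8-page) per-CPU Xen stacks, and the function is an illustrative model of the GET_STACK_END computation:

```c
#include <assert.h>
#include <stdint.h>

#define STACK_SIZE (1u << 15)   /* 32KiB per-CPU stacks, 8 pages */

/* ORing the stack pointer with STACK_SIZE-1 yields the last byte of the
 * current per-CPU stack; cpuinfo fields then live at fixed negative
 * offsets from that address. */
static uint64_t get_stack_end(uint64_t rsp)
{
    return rsp | (STACK_SIZE - 1);
}
```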
[4.3.0-55.el6.186.175]
- From cf1d101056f5ac447d09dd3cefb5a560bfd8af8a Mon Sep 17 00:00:00 2001
From: Jan Beulich
Date: Tue, 29 May 2018 12:38:52 +0200
Subject: [PATCH OVM3.3.x v2 17/18] x86: correct default_xen_spec_ctrl calculation
Even with opt_msr_sc_{pv,hvm} both false we should set up the variable
as usual, to ensure proper one-time setup during boot and CPU bringup.
This then also brings the code in line with the comment immediately
ahead of the printk() being modified saying 'irrespective of guests'.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Release-acked-by: Juergen Gross
(cherry picked from commit d6239f64713df819278bf048446d3187c6ac4734)
Signed-off-by: Boris Ostrovsky
Reviewed-by: Patrick Colp
Reviewed-by: Ross Philipson
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.174]
- From 5ce626add58bbe7b2d7463942d9c97e50aff97fb Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Fri, 13 Apr 2018 15:42:34 +0000
Subject: [PATCH OVM3.3.x v2 16/18] x86/msr: Virtualise MSR_SPEC_CTRL.SSBD for guests to use
Almost all infrastructure is already in place. Update the reserved bits
calculation in guest_wrmsr(), and offer SSBD to guests by default.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
(cherry picked from commit cd53023df952cf0084be9ee3d15a90f8837049c2)
This is XSA-263.
Signed-off-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
Acked-by: Adnan Misherfi
Conflicts:
- no MSR_INTEL_MISC_FEATURES_ENABLES in guest_wrmsr
- no cpufeatureset.h, set X86_FEATURE_SSBD in libxc
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.173]
- From b031a8b6e42f8ff595b7896c147056fc9fbea88c Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Wed, 28 Mar 2018 15:21:39 +0100
Subject: [PATCH OVM3.3.x v2 15/18] x86/Intel: Mitigations for GPZ SP4 - Speculative Store Bypass
To combat GPZ SP4 'Speculative Store Bypass', Intel have extended their
speculative sidechannel mitigations specification as follows:
* A feature bit to indicate that Speculative Store Bypass Disable is
supported.
* A new bit in MSR_SPEC_CTRL which, when set, disables memory disambiguation
in the pipeline.
* A new bit in MSR_ARCH_CAPABILITIES, which will be set in future hardware,
indicating that the hardware is not susceptible to Speculative Store Bypass
sidechannels.
For contemporary processors, this interface will be implemented via a
microcode update.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
(cherry picked from commit 9df52a25e0e95a0b9971aa2fc26c5c6a5cbdf4ef)
This is XSA-263.
Signed-off-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
Acked-by: Adnan Misherfi
Conflicts:
- No upstream's CPUID framework, we have our own. Implement
proper CPUID support for SSBD, OVM-specific.
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
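The decision this interface enables can be sketched as follows. The bit positions are the real architectural ones (SSBD is MSR_SPEC_CTRL bit 2, SSB_NO is MSR_ARCH_CAPABILITIES bit 4); the function itself is illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SPEC_CTRL_SSBD   (1ull << 2)  /* MSR_SPEC_CTRL bit 2 */
#define ARCH_CAPS_SSB_NO (1ull << 4)  /* MSR_ARCH_CAPABILITIES bit 4 */

/* If the hardware reports SSB_NO it is not susceptible and nothing is
 * done; otherwise SSBD is applied only when the feature bit says the
 * control exists (via microcode on contemporary parts). */
static uint64_t spec_ctrl_for_ssbd(bool cpu_has_ssbd, uint64_t arch_caps,
                                   bool want_ssbd)
{
    if (arch_caps & ARCH_CAPS_SSB_NO)
        return 0;                 /* hardware immune */
    if (cpu_has_ssbd && want_ssbd)
        return SPEC_CTRL_SSBD;    /* disable memory disambiguation */
    return 0;
}
```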
[4.3.0-55.el6.186.172]
- From 1c4bde7d9761d63cf9564a02f349351ccc9fdcd8 Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Thu, 26 Apr 2018 10:56:28 +0100
Subject: [PATCH OVM3.3.x v2 14/18] x86/AMD: Mitigations for GPZ SP4 - Speculative Store Bypass
AMD processors will execute loads and stores with the same base register in
program order, which is typically how a compiler emits code.
Therefore, by default no mitigating actions are taken, despite there being
corner cases which are vulnerable to the issue.
For performance testing, or for users with particularly sensitive workloads,
the command line option is available to force Xen to disable
Memory Disambiguation on applicable hardware.
Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
(cherry picked from commit 8c0e338086f060eba31d37b83fbdb883928aa085)
This is XSA-263.
Signed-off-by: Boris Ostrovsky
Reviewed-by: Konrad Rzeszutek Wilk
Reviewed-by: Ross Philipson
Acked-by: Adnan Misherfi
Conflicts:
tools/libxl/libxl_cpuid.c
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.171]
- From 86eac18feb0c1cbaa350e627176a99e9bd73297a Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Thu, 26 Apr 2018 10:52:55 +0100
Subject: [PATCH OVM3.3.x v2 13/18] x86/spec_ctrl: Introduce a new spec-ctrl= command line argument to replace bti=
In hindsight, the options for bti= aren't as flexible or useful as expected
(including several options which don't appear to behave as intended).
Changing the behaviour of an existing option is problematic for compatibility,
so introduce a new spec-ctrl= in the hopes that we can do better.
One common way of deploying Xen is with a single PV dom0 and all domUs being
HVM domains. In such a setup, an administrator who has weighed up the risks
may wish to forgo protection against malicious PV domains, to reduce the
overall performance hit. To cater for this usecase, spec-ctrl=no-pv will
disable all speculative protection for PV domains, while leaving all
speculative protection for HVM domains intact.
For coding clarity as much as anything else, the suboptions are grouped by
logical area; those which affect the alternatives blocks, and those which
affect Xen's in-hypervisor settings. See the xen-command-line.markdown for
full details of the new options.
While changing the command line options, take the time to change how the data
is reported to the user. The three DEBUG printks are upgraded to unilateral,
as they are all relevant pieces of information, and the old 'mitigations:'
line is split in the two logical areas described above.
Sample output from booting with spec-ctrl=no-pv looks like:
(XEN) Speculative mitigation facilities:
(XEN) Hardware features: IBRS/IBPB STIBP IBPB
(XEN) Compiled-in support: INDIRECT_THUNK
(XEN) Xen settings: BTI-Thunk RETPOLINE, SPEC_CTRL: IBRS-, Other: IBPB
(XEN) Support for VMs: PV: None, HVM: MSR_SPEC_CTRL RSB
(XEN) XPTI (64-bit PV only): Dom0 enabled, DomU enabled
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Release-acked-by: Juergen Gross
(cherry picked from commit 3352afc26c497d26ecb70527db3cb29daf7b1422)
This is a prerequisite for XSA-263
Signed-off-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
Acked-by: Adnan Misherfi
Conflicts:
xen/arch/x86/spec_ctrl.c - context conflicts
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.170]
- From f76070cd46469b905d27c28308f95641f0414c2e Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Tue, 1 May 2018 11:59:03 +0100
Subject: [PATCH OVM3.3.x v2 12/18] x86/cpuid: Improvements to guest policies for speculative sidechannel features
If Xen isn't virtualising MSR_SPEC_CTRL for guests, IBRSB shouldn't be
advertised. It is not currently possible to express this via the existing
command line options, but such an ability will be introduced.
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Release-acked-by: Juergen Gross
(cherry picked from commit cb06b308ec71b23f37a44f5e2351fe2cae0306e9)
This is a prerequisite for XSA-263
Signed-off-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
Acked-by: Adnan Misherfi
Based on 4.6 patch
Conflicts:
- No changes to xen/arch/x86/traps.c since we don't really
support PV mitigations
- Make changes to update_domain_cpuid_info() to tie X86_FEATURE_SC_MSR_HVM
to cp->feat.ibrsb
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.169]
- From 6fb449d31503913d6187ccb9203f7d00094fa715 Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Mon, 21 May 2018 15:07:13 -0400
Subject: [PATCH OVM3.3.x v2 11/18] x86/spec_ctrl: Explicitly set Xen's default MSR_SPEC_CTRL value
With the impending ability to disable MSR_SPEC_CTRL handling on a
per-guest-type basis, the first exit-from-guest may not have the side effect
of loading Xen's choice of value. Explicitly set Xen's default during the BSP
and AP boot paths.
For the BSP however, delay setting a non-zero MSR_SPEC_CTRL default until
after dom0 has been constructed when safe to do so. Oracle report that this
speeds up boots of some hardware by 50s.
'when safe to do so' is based on whether we are virtualised. A native boot
won't have any other code running in a position to mount an attack.
Reported-by: Zhenzhong Duan
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Release-acked-by: Juergen Gross
From upstream commit cb8c12020307b39a89273d7699e89000451987ab
Mostly the same. Significant context differences in __start_xen
and start_secondary. Also had to add cpu_has_hypervisor.
This is a prerequisite for XSA-263
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Acked-by: Adnan Misherfi
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.168]
- From b3f576f2dc8fa64ddf05348463e1e6578f6050d8 Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Mon, 21 May 2018 14:45:14 -0400
Subject: [PATCH OVM3.3.x v2 10/18] x86/spec_ctrl: Split X86_FEATURE_SC_MSR into PV and HVM variants
In order to separately control whether MSR_SPEC_CTRL is virtualised for PV and
HVM guests, split the feature used to control runtime alternatives into two.
Xen will use MSR_SPEC_CTRL itself if either of these features are active.
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Release-acked-by: Juergen Gross
From upstream commit fa9eb09d446a1279f5e861e6b84fa8675dabf148
Mostly the same, context differences.
This is a prerequisite for XSA-263
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Acked-by: Adnan Misherfi
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
[4.3.0-55.el6.186.167]
- From 6fe58f4b6cddf7f0f7b7fc311073b15f6af316e0 Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Mon, 21 May 2018 14:32:02 -0400
Subject: [PATCH OVM3.3.x v2 09/18] x86/spec_ctrl: Elide MSR_SPEC_CTRL handling in idle context when possible
If Xen is virtualising MSR_SPEC_CTRL handling for guests, but using 0 as its
own MSR_SPEC_CTRL value, spec_ctrl_{enter,exit}_idle() need not write to the
MSR.
Requested-by: Jan Beulich
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Release-acked-by: Juergen Gross
From upstream commit 94df6e8588e35cc2028ccb3fd2921c6e6360605e
Mostly the same, context differences.
This is a prerequisite for XSA-263
Signed-off-by: Ross Philipson
Reviewed-by: Boris Ostrovsky
Acked-by: Adnan Misherfi
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Ross Philipson
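The optimisation can be modelled as below; the names are illustrative, not Xen's spec_ctrl_{enter,exit}_idle():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Entering idle wants MSR_SPEC_CTRL = 0, so if Xen's own value is
 * already 0 the wrmsr can be elided entirely, even when MSR_SPEC_CTRL is
 * being virtualised for guests. */
static bool idle_needs_spec_ctrl_wrmsr(bool sc_msr_in_use,
                                       uint32_t xen_spec_ctrl)
{
    return sc_msr_in_use && xen_spec_ctrl != 0;
}
```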
[4.3.0-55.el6.186.166]
- From ba52881d66d2fea15388b7cfff6376d66000c5d6 Mon Sep 17 00:00:00 2001
From: Andrew Cooper
Date: Mon, 21 May 2018 13:45:48 -0400
Subject: [PATCH OVM3.3.x v2 08/18] x86/spec_ctrl: Rename bits of infrastructure to avoid NATIVE and VMEXIT
In hindsight, using NATIVE and VMEXIT as naming terminology was not clever.
A future change wants to split SPEC_CTRL_EXIT_TO_GUEST into PV and HVM
specific implementations, and using VMEXIT as a term is completely wrong.
Take the opportunity to fix some stale documentation in spec_ctrl_asm.h. The
IST helpers were missing from the large comment block, and since
SPEC_CTRL_ENTRY_FROM_INTR_IST was introduced, we've gained a new piece of
functionality which currently depends on the fine grain control, which exists
in lieu of livepatching. Note this in the comment.
No functional change.
Signed-off-by: Andrew Cooper
Reviewed-by: Wei Liu
Reviewed-by: Jan Beulich
Release-acked-by: Juergen Gross