OVMSA-2016-0081 - xen security update
Type: SECURITY
Severity: IMPORTANT
Release Date: 2016-06-20
Description
[4.1.3-25.el5.223.26]
- Revert 'x86/HVM: correct CPUID leaf 80000008 handling'
Originally added by Oracle bug 23170585.
Signed-off-by: Chuck Anderson [bug 23500509]
[4.1.3-25.el5.223.25]
- Revert 'x86: limit GFNs to 32 bits for shadowed superpages.'
Originally added through Oracle bug 23170585.
Signed-off-by: Chuck Anderson
Reviewed-by: Joe Jin [bug 23500509]
[4.1.3-25.el5.223.24]
- vga: make sure vga register setup for vbe stays intact
(CVE-2016-3712).
Call vbe_update_vgaregs() when the guest touches GFX, SEQ or CRT
registers, to make sure the vga registers will always have the
values needed by vbe mode. This makes sure the sanity checks
applied by vbe_fixup_regs() are effective.
Without this guests can muck with shift_control, can turn on planar
vga modes or text mode emulation while VBE is active, making qemu
take code paths meant for CGA compatibility, but with the very
large display widths and heights settable using VBE registers.
Which is good for one or another buffer overflow. Not that
critical, as they typically are read overflows happening somewhere
in the display code. So guests can DoS by crashing qemu with a
segfault, but it is probably not possible to break out of the VM.
Fixes: CVE-2016-3712
Reported-by: Zuozhi Fzz
Reported-by: P J P
Signed-off-by: Gerd Hoffmann
[Backport to qemu-xen-traditional]
Signed-off-by: Andrew Cooper
Upstream commit 0b0cf8110e97b0cbd0da73d11163e26978822757
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23263192] {CVE-2016-3712}
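For readers unfamiliar with the mechanism, the fix's shape can be sketched in a few lines of C. Everything below (struct fields, register indices and values) is illustrative, not the actual qemu-xen-traditional code; it only shows how a guest register write immediately re-applies the VBE-required setup.
```c
#include <stdint.h>
#include <stdbool.h>

struct vga_state {
    bool vbe_enabled;
    uint8_t sr[8];           /* sequencer registers (illustrative) */
    uint8_t gr[16];          /* graphics controller registers (illustrative) */
};

/* Re-apply the register values a linear VBE framebuffer depends on. */
static void vbe_update_vgaregs(struct vga_state *s)
{
    if (!s->vbe_enabled)
        return;
    s->sr[4] = 0x08;         /* chain-4: flat, non-planar addressing */
    s->gr[5] = 0x40;         /* shift control: 256-color mode */
    s->gr[6] = 0x05;         /* graphics mode, not text emulation */
}

static void vga_gr_write(struct vga_state *s, unsigned int idx, uint8_t val)
{
    s->gr[idx & 0x0f] = val;
    vbe_update_vgaregs(s);   /* a guest write cannot leave VBE state stale */
}
```
The real patch hooks the GFX, SEQ and CRT write paths in the same way.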
[4.1.3-25.el5.223.23]
- vga: update vga register setup on vbe changes
Call the new vbe_update_vgaregs() function on vbe configuration
changes, to make sure vga registers are up-to-date.
Signed-off-by: Gerd Hoffmann
[Backport to qemu-xen-traditional]
Signed-off-by: Andrew Cooper
Upstream commit 5e840e6292825fcae90f6750a8f57bc989e28c5f
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23263192] {CVE-2016-3712}
[4.1.3-25.el5.223.22]
- vga: factor out vga register setup
When enabling vbe mode, qemu will set up a bunch of vga registers to make
sure the vga emulation operates in the correct mode for a linear
framebuffer. Move that code to a separate function so we can call it
from other places too.
Signed-off-by: Gerd Hoffmann
[Backport to qemu-xen-traditional]
Signed-off-by: Andrew Cooper
Upstream commit df228023ce39e8b72bd5a198b8703319b8b9ca23
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23263192] {CVE-2016-3712}
[4.1.3-25.el5.223.21]
- vga: cleanup; makes the code a bit easier to read.
Signed-off-by: Gerd Hoffmann
[Backport to qemu-xen-traditional]
Signed-off-by: Andrew Cooper
Upstream commit 34db09fb9967441408a1ff0579d553222cf17441
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23263192]
[4.1.3-25.el5.223.20]
- CVE-2014-3615: vbe: rework sanity checks
Backport of qemu-upstream:
* c1b886c45dc70f247300f549dce9833f3fa2def5
Signed-off-by: Andrew Cooper
Upstream commit 8a1e383df25477e21b48c67c93c3a4dde19f9e4f
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23263192] {CVE-2016-3712}
[4.1.3-25.el5.223.19]
- vga: fix banked access bounds checking (CVE-2016-3710)
vga allows banked access to video memory using the window at 0xa0000,
and it supports different access modes with different address
calculations.
The VBE bochs extensions support banked access too, using the
VBE_DISPI_INDEX_BANK register. The code tries to take the different
address calculations into account and applies different limits to
VBE_DISPI_INDEX_BANK depending on the current access mode.
This is probably effective in stopping accidental misprogramming, but
from a security point of view it is completely useless, as an attacker
can easily change access modes after setting the bank register.
Drop the bogus check and add range checks to vga_mem_{readb,writeb}
instead.
Fixes: CVE-2016-3710
Reported-by: Qinghao Tang
Signed-off-by: Gerd Hoffmann
[Backport to qemu-xen-traditional]
Signed-off-by: Andrew Cooper
Upstream commit bebb4f580901fb638016d9851a28dbb83d44b3a6
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23263192] {CVE-2016-3710}
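The described fix reduces to a bounds check on the final computed address inside the memory handlers themselves; a minimal sketch, with the vram size and handler names as assumptions rather than the real qemu code:
```c
#include <stdint.h>

#define VRAM_SIZE (16u * 1024 * 1024)    /* assumed video RAM size */

static uint8_t vram[VRAM_SIZE];

static uint8_t vga_mem_readb_sketch(uint32_t addr)
{
    if (addr >= VRAM_SIZE)
        return 0xff;                     /* out-of-range read: open-bus value */
    return vram[addr];
}

static void vga_mem_writeb_sketch(uint32_t addr, uint8_t val)
{
    if (addr >= VRAM_SIZE)
        return;                          /* out-of-range write: discard */
    vram[addr] = val;
}
```
Checking here is robust regardless of which access mode the guest switches to after setting the bank register.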
[4.1.3-25.el5.223.18]
- x86: limit GFNs to 32 bits for shadowed superpages.
Superpage shadows store the shadowed GFN in the backpointer field,
which for non-BIGMEM builds is 32 bits wide. Shadowing a superpage
mapping of a guest-physical address above 2^44 would lead to the GFN
being truncated there, and a crash when we come to remove the shadow
from the hash table.
Track the valid width of a GFN for each guest, including reporting it
through CPUID, and enforce it in the shadow pagetables. Set the
maximum width to 32 for guests where this truncation could occur.
This is XSA-173.
Signed-off-by: Tim Deegan
Signed-off-by: Jan Beulich
Reported-by: Ling Liu
Conflicts:
xen/arch/x86/cpu/common.c
xen/arch/x86/mm/guest_walk.c
Upstream commit 95dd1b6e87b61222fc856724a5d828c9bdc30c80
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23170585] {CVE-2016-3960}
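Schematically, the enforcement can be pictured as below; the struct and helper names are hypothetical, not Xen's shadow-code internals:
```c
#include <stdint.h>
#include <stdbool.h>

struct guest {
    unsigned int gfn_bits;   /* e.g. capped at 32 where truncation could occur */
};

static bool gfn_within_guest_width(const struct guest *g, uint64_t gfn)
{
    if (g->gfn_bits >= 64)
        return true;                   /* no restriction to enforce */
    return (gfn >> g->gfn_bits) == 0;  /* GFN must fit the tracked width */
}
```
A GFN that fails this test is rejected before it can ever be stored in the 32-bit backpointer field.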
[4.1.3-25.el5.223.17]
- x86/HVM: correct CPUID leaf 80000008 handling
CPUID[80000008].EAX[23:16] have been given the meaning of the guest
physical address restriction (in case it needs to be smaller than the
host's), hence we need to mirror that into vCPUID[80000008].EAX[7:0].
Enforce a lower limit at the same time, as well as a fixed value for
the virtual address bits, and zero for the guest physical address ones.
In order for the vMTRR code to see these overrides we need to make it
call hvm_cpuid() instead of domain_cpuid(), which in turn requires
special casing (and relaxing) the controlling domain.
This additionally should hide an ordering problem in the tools: Both
xend and xl appear to be restoring a guest from its image before
setting up the CPUID policy in the hypervisor, resulting in
domain_cpuid() returning all zeros and hence the check in
mtrr_var_range_msr_set() failing if the guest previously had more than
the minimum 36 physical address bits.
Signed-off-by: Jan Beulich
Reviewed-by: Tim Deegan
Conflicts:
xen/arch/x86/hvm/mtrr.c
Upstream commit ef437690af8b75e6758dce77af75a22b63982883
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky [bug 23170585] {CVE-2016-3960}
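A hedged sketch of the EAX fixup the entry describes; the helper name and the fixed virtual width of 48 are assumptions, while the 36-bit lower limit and the zeroed guest-physical field come from the text above:
```c
#include <stdint.h>

static uint32_t leaf_80000008_eax_fixup(unsigned int gpa_bits)
{
    unsigned int phys = gpa_bits < 36 ? 36 : gpa_bits;  /* enforce lower limit */
    uint32_t eax = 0;
    eax |= phys & 0xff;      /* EAX[7:0]: physical address bits */
    eax |= 48u << 8;         /* EAX[15:8]: fixed virtual address bits */
    /* EAX[23:16]: guest physical address bits, deliberately left zero */
    return eax;
}
```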
[4.1.3-25.el5.223.16]
- x86/VMX: sanitize rIP before re-entering guest
... to prevent guest user mode arranging for a guest crash (due to
failed VM entry). (On the AMD system I checked, hardware is doing
exactly the canonicalization being added here.)
Note that fixing this in an architecturally correct way would be quite
a bit more involved: Making the x86 instruction emulator check all
branch targets for validity, plus dealing with invalid rIP resulting
from update_guest_eip() or incoming directly during a VM exit. The only
way to get the latter right would be by not having hardware do the
injection.
Note further that there are two early returns from
vmx_vmexit_handler(): One (through vmx_failed_vmentry()) leads to
domain_crash() anyway, and the other covers real mode only and can
neither occur with a non-canonical rIP nor result in an altered rIP,
so we don't need to force those paths through the checking logic.
This is XSA-170.
Reported-by: liuling-it
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Tested-by: Andrew Cooper
Backported-by: Zhenzhong Duan
Acked-by: Chuck Anderson [bug 23128538] {CVE-2015-5307}
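The canonicalization mentioned in the first paragraph amounts to replicating bit 47 of the rIP into the upper bits; a minimal sketch, assuming 48-bit virtual addressing and an arithmetic right shift (as provided by the compilers Xen supports):
```c
#include <stdint.h>

/* Sign-extend bit 47 into bits 63:48 so the value is always canonical. */
static uint64_t canonicalize_rip(uint64_t rip)
{
    return (uint64_t)((int64_t)(rip << 16) >> 16);
}
```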
[4.1.3-25.el5.223.15]
- xenfb: avoid reading twice the same fields from the shared page
Reading the same field twice could give the guest a window of
opportunity for an attack. In the case of event->type, gcc could
compile the switch statement into a jump table, effectively ending up
reading the type field multiple times.
This is part of XSA-155.
Signed-off-by: Stefano Stabellini
Upstream commit 0ffd4547665d2fec648ab2c9ff856c5d9db9b07c
Backported-by: Zhenzhong Duan [bug 23170509] {CVE-2015-8550}
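The fix pattern, sketched with hypothetical types: read the shared field exactly once into a local, and dispatch only on that local, so a compiler-generated jump table cannot re-read guest-writable memory.
```c
#include <stdint.h>

struct xenfb_event { uint8_t type; /* ... */ };

static void handle_event(volatile const struct xenfb_event *ev)
{
    uint8_t type = ev->type;      /* exactly one read of the shared page */
    switch (type) {
    case 1: /* e.g. update event */ break;
    case 2: /* e.g. resize event */ break;
    default: break;               /* unknown types are ignored */
    }
}
```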
[4.1.3-25.el5.223.14]
- blkif: Avoid double access to src->nr_segments
src is stored in shared memory and src->nr_segments is dereferenced
twice at the end of the function. If a compiler decides to compile this
into two separate memory accesses then the size limitation could be
bypassed.
Fix it by removing the double access to src->nr_segments.
This is part of XSA-155.
Signed-off-by: Stefano Stabellini
Signed-off-by: Konrad Rzeszutek Wilk
Upstream commit 27942b0cb2327e93deb12326bbe7b36c81f9fa7b
Backported-by: Zhenzhong Duan
Acked-by: Chuck Anderson [bug 23170509] {CVE-2015-8550}
[4.1.3-25.el5.223.13]
- blktap2: Use RING_COPY_REQUEST
Instead of RING_GET_REQUEST. Using a local copy of the
ring (and also with proper memory barriers) will mean
we do not have to worry about the compiler optimizing
the code and doing a double-fetch in the shared memory space.
This is part of XSA-155.
Signed-off-by: Konrad Rzeszutek Wilk
---
v2: Fix compile issues with tapdisk-vbd
Upstream commit 851ffb4eea917e2708c912291dea4d133026c0ac
Backported-by: Zhenzhong Duan
Acked-by: Chuck Anderson [bug 23170509] {CVE-2015-8550}
[4.1.3-25.el5.223.12]
- xen: Add RING_COPY_REQUEST()
Using RING_GET_REQUEST() on a shared ring is easy to use incorrectly
(i.e., by not considering that the other end may alter the data in the
shared ring while it is being inspected). Safe usage of a request
generally requires taking a local copy.
Provide a RING_COPY_REQUEST() macro to use instead of
RING_GET_REQUEST() and an open-coded memcpy(). This takes care of
ensuring that the copy is done correctly regardless of any possible
compiler optimizations.
Use a volatile source to prevent the compiler from reordering or
omitting the copy.
This is part of XSA-155.
Signed-off-by: David Vrabel
Signed-off-by: Konrad Rzeszutek Wilk
---
v2: Add comment about GCC bug.
Upstream commit 12b11658a9d6a654a1e7acbf2f2d56ce9a396c86
Backported-by: Zhenzhong Duan
Reviewed-by: Chuck Anderson [bug 23170509] {CVE-2015-8550}
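A simplified model of such a macro, loosely following the idea in Xen's ring.h (the request/ring types here are stand-ins): a struct assignment through a volatile-qualified source pointer forces one complete copy that the compiler may not split into later re-reads.
```c
#include <stdint.h>

struct ring_req { uint8_t operation; uint8_t nr_segments; uint64_t id; };
struct ring     { struct ring_req req[32]; };

/* Copy the whole request out of shared memory exactly once. */
#define RING_COPY_REQUEST_SKETCH(r, idx, dest)                              \
    do {                                                                    \
        *(dest) = *(volatile const struct ring_req *)&(r)->req[(idx) & 31]; \
    } while (0)
```
A backend then validates and uses only the local copy (for example its nr_segments field), never the live ring slot.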
[4.1.3-25.el5.223.11]
- x86: don't leak ST(n)/XMMn values to domains first using them
FNINIT doesn't alter these registers, and hence using it is
insufficient to initialize a guest's initial state.
This is XSA-165.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
[Backported to Xen 4.1.x]
Signed-off-by: Stefan Bader
Stefan Bader's backport of XSA-165 for Xen 4.1
Backported-by: Zhenzhong Duan
Acked-by: Chuck Anderson [bug 23128446] {CVE-2015-8555}
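Concretely, FNINIT resets control/status state but leaves register contents untouched, so vCPU setup has to clear the save area itself. A schematic sketch (the FXSAVE offsets for FCW and MXCSR are architectural; the helper and layout struct are hypothetical):
```c
#include <string.h>

struct fxsave_area { unsigned char bytes[512]; };   /* FXSAVE image size */

static void vcpu_fpu_init(struct fxsave_area *f)
{
    memset(f, 0, sizeof(*f));     /* don't leak a previous context's ST/XMM */
    f->bytes[0] = 0x7f;           /* FCW = 0x037f, the FNINIT default */
    f->bytes[1] = 0x03;
    f->bytes[24] = 0x80;          /* MXCSR = 0x1f80, the reset default */
    f->bytes[25] = 0x1f;
}
```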
[4.1.3-25.el5.223.10]
- vmx: restore debug registers when injecting #DB traps
Commit a929bee0e652 ('x86/vmx: Fix injection of #DB traps following
XSA-156') prevents an infinite loop in certain #DB traps. However, it
changed the behavior to not call hvm_hw_inject_trap() for #DB and #AC
traps, which means that the debug registers are not restored
correctly, and nullified commit b56ae5b48c38 ('VMX: fix/adjust trap
injection').
To fix this, restore the original code path through hvm_inject_trap(),
but ensure that the struct hvm_trap is populated with all the required
data.
Signed-off-by: Ross Lagerwall
Reviewed-by: Jan Beulich
Acked-by: Kevin Tian
Conflicts:
xen/arch/x86/hvm/vmx/vmx.c
Upstream commit ba22f1f4732acb4d5aebd779122e91753a0e374d
Backported-by: Zhenzhong Duan
Reviewed-by: Chuck Anderson [bug 23128266] {CVE-2015-5307}
[4.1.3-25.el5.223.9]
- x86/vmx: Fix injection of #DB traps following XSA-156
Most #DB exceptions are traps rather than faults, meaning that the instruction
pointer in the exception frame points after the instruction rather than at it.
However, VMX intercepts all have fault semantics, even when intercepting a
trap. Re-injecting an intercepted trap as a fault causes an infinite loop in
the guest, by re-executing the same trapping instruction repeatedly. This
breaks debugging inside the guest.
Introduce a helper which copies VM_EXIT_INTR_INFO to VM_ENTRY_INTR_INFO, and
use it to mirror the intercepted interrupt back to the guest.
Signed-off-by: Andrew Cooper
Acked-by: Kevin Tian
master commit: 0747bc8b4d85f3fc0ee1e58418418fa0229e8ff8
master date: 2016-01-05 11:28:56 +0000
Conflicts:
xen/arch/x86/hvm/vmx/vmx.c
Upstream commit a929bee0e65296ee565251316dc53f1c6b79084a
Backported-by: Zhenzhong Duan
Reviewed-by: Chuck Anderson [bug 23128266] {CVE-2015-5307}
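A self-contained sketch of such a helper; the VMCS field encodings are the architectural ones, while vmcs_read()/vmcs_write() are mock stand-ins for Xen's __vmread/__vmwrite so the snippet compiles on its own:
```c
#include <stdint.h>

#define VM_EXIT_INTR_INFO   0x4404u   /* read-only exit interruption info */
#define VM_ENTRY_INTR_INFO  0x4016u   /* VM-entry interruption info */

static uint64_t mock_vmcs_exit, mock_vmcs_entry;   /* stand-in storage */

static uint64_t vmcs_read(uint32_t f)
{
    return f == VM_EXIT_INTR_INFO ? mock_vmcs_exit : mock_vmcs_entry;
}

static void vmcs_write(uint32_t f, uint64_t v)
{
    if (f == VM_ENTRY_INTR_INFO)
        mock_vmcs_entry = v;
}

/* Re-inject an intercepted event exactly as it arrived, so a #DB trap is
 * not converted into a fault that re-executes the same instruction. */
static void mirror_intercepted_event(void)
{
    vmcs_write(VM_ENTRY_INTR_INFO, vmcs_read(VM_EXIT_INTR_INFO));
}
```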
[4.1.3-25.el5.223.8]
- x86/HVM: always intercept #AC and #DB
Both being benign exceptions, and both being possible to get triggered
by exception delivery, this is required to prevent a guest from locking
up a CPU (resulting from no other VM exits occurring once getting into
such a loop).
The specific scenarios:
1) #AC may be raised during exception delivery if the handler is set to
be a ring-3 one by a 32-bit guest, and the stack is misaligned.
2) #DB may be raised during exception delivery when a breakpoint got
placed on a data structure involved in delivering the exception. This
can result in an endless loop when a 64-bit guest uses a non-zero IST
for the vector 1 IDT entry, but even without use of IST the time it
takes until a contributory fault would get raised (results depending
on the handler) may be quite long.
This is XSA-156.
Reported-by: Benjamin Serebrin
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Tested-by: Andrew Cooper
Conflicts:
xen/arch/x86/hvm/svm/svm.c
xen/arch/x86/hvm/vmx/vmx.c
Backported-by: Zhenzhong Duan
Reviewed-by: Chuck Anderson [bug 23128266] {CVE-2015-5307}
[4.1.3-25.el5.223.7]
- x86/hvm/vmx: Trace traps and realmode exits
Add some more tracing to vmexits that don't currently have
trace information:
* VMX realmode emulation
* Various VMX traps
* Fast-pathed APIC accesses
Signed-off-by: George Dunlap
Committed-by: Keir Fraser
Upstream commit a2a88004afe1cce99bba724929d59366e752e886
Backported-by: Zhenzhong Duan [bug 23128266] {CVE-2015-5307}
[4.1.3-25.el5.223.6]
- x86: fix unintended fallthrough case from XSA-154
... and annotate the other deliberate one: Coverity objects otherwise.
Signed-off-by: Andrew Cooper
One of the two instances was actually a bug.
Signed-off-by: Jan Beulich
master commit: 8dd6d1c099865ee5f5916616a0ca79cd943c46f9
master date: 2016-02-18 15:10:07 +0100
Conflicts:
xen/arch/x86/mm.c
Upstream commit ccc7adf9cff5d5f93720afcc1d0f7227d50feab2
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Chuck Anderson [bug 23128198] {CVE-2016-2270}
[4.1.3-25.el5.223.5]
- x86: enforce consistent cachability of MMIO mappings
We've been told by Intel that inconsistent cachability between
multiple mappings of the same page can affect system stability only
when the affected page is an MMIO one. Since the stale data issue is
of no relevance to the hypervisor (since all guest memory accesses go
through proper accessors and validation), handling of RAM pages
remains unchanged here. Any MMIO mapped by domains however needs to be
done consistently (all cachable mappings or all uncachable ones), in
order to avoid Machine Check exceptions. Since converting existing
cachable mappings to uncachable (at the time an uncachable mapping
gets established) would in the PV case require tracking all mappings,
allow MMIO to only get mapped uncachable (UC, UC-, or WC).
This also implies that in the PV case we mustn't use the L1 PTE update
fast path when cachability flags get altered.
Since in the HVM case at least for now we want to continue honoring
pinned cachability attributes for pages not mapped by the hypervisor,
special case handling of r/o MMIO pages (forcing UC) gets added there.
Arguably the counterpart change to p2m-pt.c may not be necessary, since
UC- (which already gets enforced there) is probably strict enough.
Note that the shadow code changes include fixing the write protection
of r/o MMIO ranges: shadow_l1e_remove_flags() and its siblings, unlike
l1e_remove_flags() and similar helpers, return the new PTE (and hence
ignoring their return values makes them no-ops).
This is CVE-2016-2270 / XSA-154.
Signed-off-by: Jan Beulich
Acked-by: Andrew Cooper
Conflicts:
docs/misc/xen-command-line.markdown
xen/arch/x86/mm/p2m-pt.c
xen/arch/x86/mm/shadow/multi.c
xen/arch/x86/mm.c
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Chuck Anderson [bug 23128198] {CVE-2016-2270}
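The mapping rule can be illustrated as a predicate on the L1 entry's cacheability bits. This assumes a PAT layout in which the PWT/PCD index selects WB, WC, UC- and UC in that order; both that layout and the helper itself are illustrative rather than Xen's actual code:
```c
#include <stdint.h>
#include <stdbool.h>

#define _PAGE_PWT (1ull << 3)
#define _PAGE_PCD (1ull << 4)

static bool mmio_l1e_cacheability_ok(uint64_t l1e_flags)
{
    switch (l1e_flags & (_PAGE_PCD | _PAGE_PWT)) {
    case _PAGE_PWT:              /* WC under the assumed PAT layout */
    case _PAGE_PCD:              /* UC- */
    case _PAGE_PCD | _PAGE_PWT:  /* UC */
        return true;
    default:                     /* WB: cachable, disallowed for MMIO */
        return false;
    }
}
```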
[4.1.3-25.el5.223.4]
- x86/mm: fix mod_l1_entry() return value when encountering r/o MMIO page
While putting together the workaround announced in
http://lists.xen.org/archives/html/xen-devel/2012-06/msg00709.html, I
found that mod_l1_entry(), upon encountering a set bit in
mmio_ro_ranges, would return 1 instead of 0 (the removal of the write
permission is supposed to be entirely transparent to the caller, even
more so to the calling guest).
Signed-off-by: Jan Beulich
Acked-by: Keir Fraser
Upstream commit 70622118ee7bb925205c5d8c1e7bec82fc257351
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Chuck Anderson [bug 23128198] {CVE-2016-2270}
[4.1.3-25.el5.223.3]
- x86: make mod_l2_entry() return a proper error code
... so that finally all mod_lN_entry() functions behave identically,
allowing some cleanup in do_mmu_update() (which no longer needs to
track both an okay status and an error code).
Signed-off-by: Jan Beulich
Conflicts:
xen/arch/x86/mm.c
Upstream commit 7f107ea5c0790f6a702c801589d13bd275e20260
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Chuck Anderson [bug 23128198] {CVE-2016-2270}
[4.1.3-25.el5.223.2]
- x86: make mod_l1_entry() return a proper error code
... again, so that the guest can actually know the reason for the
(hypercall) failure.
Signed-off-by: Jan Beulich
Upstream commit 541ce4766e7a3d1c3e36b70ff75f8b900ad8d20f
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Chuck Anderson [bug 23128198] {CVE-2016-2270}
[4.1.3-25.el5.223.1]
- x86: make get_page_from_l1e() return a proper error code
... so that the guest can actually know the reason for the (hypercall)
failure.
ptwr_do_page_fault() could propagate the error indicator received from
get_page_from_l1e() back to the guest in the high half of the error
code (entry_vector), provided we're sure all existing guests can deal
with that (or indicate so by means of a to-be-added guest feature
flag). Alternatively, a second virtual status register (like CR2)
could be introduced.
Signed-off-by: Jan Beulich
Upstream commit be640b1800bbaaadf4db81b3badbb38714c2f309
Backported-by: Zhenzhong Duan
Reviewed-by: Boris Ostrovsky
Reviewed-by: Chuck Anderson [bug 23128198] {CVE-2016-2270}
[4.1.3-25.el5.223]
- x86: fix information leak on AMD CPUs
The fix for XSA-52 was wrong, and so was the change synchronizing that
new behavior to the FXRSTOR logic: AMD's manuals explicitly state that
writes to the ES bit are ignored, and it instead gets calculated from
the exception and mask bits (it gets set whenever there is an unmasked
exception, and cleared otherwise). Hence we need to follow that model
in our workaround.
This is XSA-172 / CVE-2016-3158.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Based on xen.org's xsa172-4.3.patch
Conflicts:
xrstor() is in xen/arch/x86/i387.c; slightly different base code.
Signed-off-by: Chuck Anderson
Reviewed-by: Boris Ostrovsky
Tested-by: Boris Ostrovsky [bug 23006903] {CVE-2016-3158,CVE-2016-3159}
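The AMD model described above translates directly into code: ES (bit 7 of the x87 status word) is never written through, but derived from the pending exception flags and the control-word masks. A minimal sketch with an assumed helper name:
```c
#include <stdint.h>

static uint16_t recompute_fsw_es(uint16_t fsw, uint16_t fcw)
{
    /* bits 0-5: exception flags in FSW, corresponding mask bits in FCW */
    if ((fsw & 0x3f) & ~(fcw & 0x3f))
        fsw |= 1u << 7;      /* unmasked exception pending: set ES */
    else
        fsw &= ~(1u << 7);   /* all pending exceptions masked: clear ES */
    return fsw;
}
```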
[4.1.3-25.el5.222]
- xen: enable vcpu migration during system suspend
Xen doesn't do vcpu migration when it prepares to disable all non-boot cpus.
This works in most cases, but we could reproduce a page fault with the call
trace below:
(XEN) Xen call trace:
(XEN) [] vcpu_wake+0x46/0x3e0
(XEN) [] do_IRQ+0x27a/0x5d0
(XEN) [] vcpu_kick+0x1d/0x80
(XEN) [] __find_next_bit+0x6a/0x70
(XEN) [] evtchn_set_pending+0xab/0x1b0
(XEN) [] send_guest_vcpu_virq+0x46/0x70
(XEN) [] vcpu_singleshot_timer_fn+0x0/0x10
(XEN) [] execute_timer+0x4c/0x70
(XEN) [] timer_softirq_action+0x85/0x220
(XEN) [] __do_softirq+0x65/0x90
(XEN) [] __cpu_die+0x4e/0xa0
(XEN) [] cpu_down+0xdf/0x150
(XEN) [] acpi_get_table+0x5e/0xb0
(XEN) [] disable_nonboot_cpus+0x9e/0x140
(XEN) [] enter_state_helper+0xcb/0x350
(XEN) [] continue_hypercall_tasklet_handler+0xe9/0x100
(XEN) [] do_tasklet_work+0x5e/0xb0
(XEN) [] do_tasklet+0x7e/0xc0
(XEN) [] idle_loop+0x19/0x60
If a vcpu isn't migrated when vcpu->processor goes down, vcpu->processor
keeps a stale value. All the structures allocated for vcpu->processor are
already freed, and accessing them leads to a page fault.
In the trace above, per_cpu(schedule_data, v->processor).schedule_lock is accessed.
Signed-off-by: Zhenzhong Duan
Reviewed-by: Konrad Rzeszutek Wilk [bug 22753848]
[4.1.3-25.el5.221]
- xend: error out if pcnet emulated driver model is used.
From http://xenbits.xen.org/xsa/advisory-162.html:
The QEMU security team has predisclosed the following advisory:
The AMD PC-Net II emulator(hw/net/pcnet.c), while receiving
packets in loopback mode, appends CRC code to the receive
buffer. If the data size given is same as the buffer size(4096),
the appended CRC code overwrites 4 bytes after the s->buffer,
making the adjacent 's->irq' object point to a new location.
IMPACT
======
A guest which has access to an emulated PCNET network device
(e.g. with 'model=pcnet' in their VIF configuration) can exploit this
vulnerability to take over the qemu process, elevating its privilege to
that of the qemu process.
This patch resolves XSA-162 but is not the solution provided by xen.org.
Fail the creation if model=pcnet is used.
Suggested-by: Boris Ostrovsky
Signed-off-by: Konrad Rzeszutek Wilk
Acked-by: Chuck Anderson [bug 22591625] {CVE-2015-7504}
[4.1.3-25.el5.220]
- x86/HVM: avoid reading ioreq state more than once
Otherwise, especially when the compiler chooses to translate the
switch() to a jump table, unpredictable behavior (and in the jump table
case arbitrary code execution) can result.
This is XSA-166.
Signed-off-by: Jan Beulich
Acked-by: Ian Campbell
Acked-by: Chuck Anderson [bug 22593156]
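The fix shape, sketched with hypothetical types: latch the shared state once and switch on the local copy. The acquire load is an extra precaution in this sketch, keeping the later data read from being hoisted above the state check:
```c
#include <stdint.h>

struct ioreq { uint32_t state; uint64_t data; };

static uint64_t process_ioreq(struct ioreq *p)
{
    /* one load of guest-visible state; the switch indexes the local copy */
    uint32_t state = __atomic_load_n(&p->state, __ATOMIC_ACQUIRE);
    switch (state) {
    case 1:  return p->data;    /* e.g. response ready (illustrative) */
    default: return 0;          /* anything else: no work to do */
    }
}
```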
[4.1.3-25.el5.219]
- MSI-X: avoid array overrun upon MSI-X table writes
pt_msix_init() allocates msix->msix_entry[] to just cover
msix->total_entries entries. While pci_msix_readl() resorts to reading
physical memory for out of bounds reads, pci_msix_writel() so far
simply accessed/corrupted unrelated memory.
pt_iomem_map()'s call to cpu_register_physical_memory() registers a
page granular region, which is necessary as the Pending Bit Array may
share space with the MSI-X table (but nothing else is allowed to). This
also explains why pci_msix_readl() actually honors out of bounds reads,
but pci_msix_writel() doesn't need to.
This is XSA-164.
Signed-off-by: Jan Beulich
Acked-by: Ian Campbell
Acked-by: Chuck Anderson [bug 22588869] {CVE-2015-8554}
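The missing check reduces to validating the entry index against the allocated table before the write; the structure below is a stand-in for the qemu-xen passthrough state, not the real code:
```c
#include <stdint.h>

struct pt_msix {
    uint32_t *msix_entry;        /* total_entries * 4 dwords allocated */
    unsigned int total_entries;
};

static void msix_table_writel_sketch(struct pt_msix *msix,
                                     unsigned int entry, unsigned int dword,
                                     uint32_t val)
{
    if (entry >= msix->total_entries || dword >= 4)
        return;                  /* beyond the table: drop, don't corrupt */
    msix->msix_entry[entry * 4 + dword] = val;
}
```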
[4.1.3-25.el5.218]
- VT-d: fix TLB flushing in dma_pte_clear_one()
From: Jan Beulich
The TLB flush code has been wrong since xen-4.1.3-25.el5.127.20 (commit:
vtd-Refactor-iotlb-flush-code.patch); both ovm-3.2.9 and ovm-3.2.10 were
affected.
The third parameter of __intel_iommu_iotlb_flush() is to indicate
whether the to be flushed entry was a present one. A few lines before,
we bailed if !dma_pte_present(*pte), so there's no need to check the
flag here again - we can simply always pass TRUE here.
This is CVE-2013-6375 / XSA-78.
Suggested-by: Cheng Yueqiang
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Acked-by: Keir Fraser
(cherry picked from commit 85c72f9fe764ed96f5c149efcdd69ab7c18bfe3d)
Signed-off-by: Bob Liu
Reviewed-by: Konrad Rzeszutek Wilk
Acked-by: Chuck Anderson [bug 22026725] {CVE-2013-6375}
[4.1.3-25.el5.217]
- x86/VMX: prevent INVVPID failure due to non-canonical guest address
While INVLPG (and on SVM INVLPGA) don't fault on non-canonical
addresses, INVVPID fails (in the 'individual address' case) when passed
such an address.
Since such intercepted INVLPG are effectively no-ops anyway, don't fix
this in vmx_invlpg_intercept(), but instead have paging_invlpg() never
return true in such a case.
This is XSA-168.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Acked-by: Ian Campbell
Acked-by: Chuck Anderson [bug 22584930] {CVE-2016-1571}
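The guard needed here is a canonical-address test, so that an INVLPG on a non-canonical address is treated as the no-op it effectively is; a sketch assuming 48-bit virtual addressing and an arithmetic right shift:
```c
#include <stdint.h>
#include <stdbool.h>

static bool is_canonical_address(uint64_t va)
{
    /* bits 63:47 must all equal bit 47 */
    return (uint64_t)((int64_t)(va << 16) >> 16) == va;
}
```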
[4.1.3-25.el5.216]
- x86/mm: PV superpage handling lacks sanity checks
MMUEXT_{,UN}MARK_SUPER fail to check the input MFN for validity before
dereferencing pointers into the superpage frame table.
get_superpage() has a similar issue.
This is XSA-167.
Reported-by: Qinghao Tang
Signed-off-by: Jan Beulich
Acked-by: Ian Campbell
Acked-by: Chuck Anderson [bug 22584929] {CVE-2016-1570}
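The shape of the missing sanity check: the whole 2MB frame range must be valid before any pointer into the superpage frame table is formed. The constant and helper name below are illustrative:
```c
#include <stdint.h>
#include <stdbool.h>

#define SUPERPAGE_FRAMES 512u    /* 2MB superpage = 512 4KB frames */

static bool superpage_mfn_ok(uint64_t mfn, uint64_t max_valid_mfn)
{
    /* every frame of the superpage must be a valid machine frame */
    return (mfn | (SUPERPAGE_FRAMES - 1)) < max_valid_mfn;
}
```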
[4.1.3-25.el5.215]
- xend/image: Don't throw VMException when using backend domains for disks.
If we are using backend domains, the disk image may not be
accessible within the host (domain0). As such, it is OK to
continue on.
The 'addStoreEntries' in DevController.py already does the check
to make sure that, when the 'backend' configuration is used, said
domain exists.
As such, the only change we need to make is to exclude the disk
image location if the domain is not dom0.
Reviewed-by: Konrad Rzeszutek Wilk
Acked-by: Adnan Misherfi
Signed-off-by: Zhigang Wang
Signed-off-by: Joe Jin [bug 22242536]
[4.1.3-25.el5.214]
- memory: fix XENMEM_exchange error handling
assign_pages() can fail due to the domain getting killed in parallel,
which should not result in a hypervisor crash.
Also delete a redundant put_gfn() - all relevant paths leading to the
'fail' label already do this (and there are also paths where it was
plain wrong). All of the put_gfn()-s got introduced by 51032ca058
('Modify naming of queries into the p2m'), including the otherwise
unneeded initializer for k (with even a kind of misleading comment -
the compiler warning could actually have served as a hint that the use
is wrong).
This is XSA-159.
Signed-off-by: Jan Beulich
Acked-by: Ian Campbell
Based on xen.org's xsa159.patch
Conflicts:
OVM 3.2 does not have the change (51032ca058) that is backed out
in xen/common/memory.c or the put_gfn() in xen/common/memory.c
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 22326044] {CVE-2015-8339,CVE-2015-8340}
[4.1.3-25.el5.213]
- x86: rate-limit logging in do_xen{oprof,pmu}_op()
Some of the sub-ops are accessible to all guests, and hence should be
rate-limited. In the xenoprof case, just like for XSA-146, include them
only in debug builds. Since the vPMU code is rather new, allow them to
be always present, but downgrade them to (rate limited) guest messages.
This is XSA-152.
Signed-off-by: Jan Beulich
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 22088906] {CVE-2015-7971}
[4.1.3-25.el5.212]
- xenoprof: free domain's vcpu array
This was overlooked in fb442e2171 ('x86_64: allow more vCPU-s per
guest').
This is XSA-151.
Signed-off-by: Jan Beulich
Reviewed-by: Ian Campbell
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 22088836] {CVE-2015-7969}
[4.1.3-25.el5.211]
- x86: guard against undue super page PTE creation
When optional super page support got added (commit bd1cd81d64 'x86: PV
support for hugepages'), two adjustments were missed: mod_l2_entry()
needs to consider the PSE and RW bits when deciding whether to use the
fast path, and the PSE bit must not be removed from L2_DISALLOW_MASK
unconditionally.
This is XSA-148.
Signed-off-by: Jan Beulich
Reviewed-by: Tim Deegan
[backport to Xen 4.1]
Signed-off-by: Andrew Cooper
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 22088411] {CVE-2015-7835}
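The first of the two missed adjustments amounts to a guard like the following (PTE flag positions are the architectural ones; the helper name is illustrative): the fast path is only safe when neither PSE nor RW changes.
```c
#include <stdint.h>
#include <stdbool.h>

#define _PAGE_RW  (1ull << 1)
#define _PAGE_PSE (1ull << 7)

static bool l2e_fastpath_ok(uint64_t old_l2e, uint64_t new_l2e)
{
    /* any PSE or RW change must take the fully validated slow path */
    return ((old_l2e ^ new_l2e) & (_PAGE_PSE | _PAGE_RW)) == 0;
}
```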
[4.1.3-25.el5.210]
- chkconfig services should be associated with xen-tools
Signed-off-by: Zhigang Wang
Reviewed-by: Adnan Misherfi [bug 21889174]
Related CVEs
CVE-2013-6375, CVE-2015-5307, CVE-2015-7504, CVE-2015-7835, CVE-2015-7969,
CVE-2015-7971, CVE-2015-8339, CVE-2015-8340, CVE-2015-8550, CVE-2015-8554,
CVE-2015-8555, CVE-2016-1570, CVE-2016-1571, CVE-2016-2270, CVE-2016-3158,
CVE-2016-3159, CVE-2016-3710, CVE-2016-3712, CVE-2016-3960
Updated Packages
Release/Architecture | Filename | MD5sum | Superseded By Advisory
Oracle VM 3.2 (x86_64) | xen-4.1.3-25.el5.223.26.src.rpm | 6cc9d0903d5931a9ed841bcb6c5d4919 | OVMSA-2021-0014
Oracle VM 3.2 (x86_64) | xen-4.1.3-25.el5.223.26.x86_64.rpm | 6eca8b8c32062c94e7345727aaff3204 | OVMSA-2021-0014
Oracle VM 3.2 (x86_64) | xen-devel-4.1.3-25.el5.223.26.x86_64.rpm | 6ea6e916274d6efd16bcfa6bafebc1dd | OVMSA-2019-0048
Oracle VM 3.2 (x86_64) | xen-tools-4.1.3-25.el5.223.26.x86_64.rpm | d9029f67d36d0a3e90cdaafb1ade0f8b | OVMSA-2021-0014