OVMSA-2013-0085

OVMSA-2013-0085 - xen security update

Type: SECURITY
Severity: IMPORTANT
Release Date: 2013-12-05

Description


[4.1.3-25.el5.88]
- x86/HVM: only allow ring 0 guest code to make hypercalls
Anything else would allow for privilege escalation.
This is CVE-2013-4554 / XSA-76.
Signed-off-by: Jan Beulich
Signed-off-by: Chuck Anderson
Reviewed-by: Jerry Snitselaar [bug 17822232] {CVE-2013-4554}
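The check itself is small; here is a minimal sketch (hypothetical helper name, not the actual Xen code) of testing the guest's Current Privilege Level before dispatching a hypercall:

```c
#include <assert.h>

/* Hypothetical sketch of the XSA-76 class of check: in protected
 * mode the low two bits of the CS selector give the Current
 * Privilege Level (CPL), and a hypercall should only be dispatched
 * when the guest code runs in ring 0. */
static int hypercall_allowed(unsigned int cs_selector)
{
    unsigned int cpl = cs_selector & 3;  /* CPL 0 = kernel, 3 = user */
    return cpl == 0;
}
```

Without such a check, unprivileged (ring 3) guest code could issue hypercalls directly, bypassing the guest kernel entirely.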

[4.1.3-25.el5.87]
- x86: restrict XEN_DOMCTL_getmemlist
Coverity ID 1055652
(See the code comment.)
This is CVE-2013-4553 / XSA-74.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Reviewed-by: Tim Deegan
Signed-off-by: Chuck Anderson
Reviewed-by: Jerry Snitselaar [bug 17821622] {CVE-2013-4553}

[4.1.3-25.el5.86]
- gnttab: update version 1 of xsa73-4.1.patch to version 3
Version 1 of xsa73-4.1.patch had an error:
bool_t drop_dom_ref = (e->tot_pages-- == 0);
should have been:
bool_t drop_dom_ref = (e->tot_pages-- == 1);
Signed-off-by: Andrew Cooper
Consolidate error handling.
Signed-off-by: Jan Beulich
Reviewed-by: Keir Fraser
Tested-by: Matthew Daley
Backported to Xen-4.1
Signed-off-by: Andrew Cooper
Signed-off-by: Chuck Anderson [bug 17760875] {CVE-2013-4494}
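The one-character change matters because of post-decrement semantics: the comparison sees the value before the decrement, so the last reference must be dropped when the old value is 1, not 0. A minimal sketch (hypothetical function name) of the corrected logic:

```c
#include <assert.h>

/* Hypothetical sketch of the corrected test: with a post-decrement,
 * the comparison uses the value *before* the decrement.  The domain
 * reference must be dropped when tot_pages goes 1 -> 0, i.e. when
 * the old value equals 1; comparing against 0 never fires on that
 * transition (it would only fire after the count had already
 * wrapped). */
static int drop_last_ref(unsigned int *tot_pages)
{
    return ((*tot_pages)-- == 1);
}
```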

[4.1.3-25.el5.85]
- Xen: Spread boot time page scrubbing across all available CPUs
Written by Malcolm Crossley
The page scrubbing is done in 256MB chunks in lockstep across all the CPUs.
This allows the boot CPU to hold the heap_lock whilst each chunk is being
scrubbed and then release the heap_lock when all CPUs have finished scrubbing
their individual chunks. This means the heap_lock is not held
continuously and pending softirqs are serviced periodically across
all CPUs.
The page scrub memory chunks are allocated to the CPUs in a NUMA-aware
fashion to reduce socket interconnect overhead and improve performance.
This patch reduces the boot page scrub time on a 256GB 16-core AMD Opteron
machine from 1 minute 46 seconds to 38 seconds.
Signed-off-by: Mukesh Rathor [bug 17723396]
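The chunking idea can be sketched as follows (hypothetical names; a single-CPU simplification of the lockstep scheme, not the actual patch):

```c
#include <assert.h>
#include <stdbool.h>

#define CHUNK_PAGES 65536UL  /* 256MB in 4KB pages, as in the patch */

/* Hypothetical single-CPU simplification of the chunked scheme: the
 * heap lock is taken per 256MB chunk rather than across the whole
 * scrub, leaving windows between chunks in which pending softirqs
 * can be serviced. */
static unsigned long pages_scrubbed;
static bool heap_locked;

static void lock_heap(void)   { heap_locked = true;  }
static void unlock_heap(void) { heap_locked = false; }

static void scrub_chunk(unsigned long pages)
{
    pages_scrubbed += pages;  /* stand-in for zeroing the pages */
}

static void scrub_boot_pages(unsigned long total_pages)
{
    while (total_pages) {
        unsigned long n = total_pages < CHUNK_PAGES ? total_pages
                                                    : CHUNK_PAGES;
        lock_heap();    /* lock held only for this chunk...    */
        scrub_chunk(n);
        unlock_heap();  /* ...then released so softirqs can run */
        total_pages -= n;
    }
}
```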

[4.1.3-25.el5.84]
- gnttab: correct locking order reversal
Coverity ID 1087189
Correct a lock order reversal between a domain's page allocation and grant
table locks.
This is XSA-73.
Signed-off-by: Andrew Cooper
Consolidate error handling.
Signed-off-by: Jan Beulich
Reviewed-by: Keir Fraser
Tested-by: Matthew Daley
Backported to Xen-4.1
Signed-off-by: Andrew Cooper
Signed-off-by: Chuck Anderson [bug 17723396] {CVE-2013-4494}
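A lock order reversal is a general deadlock pattern: if one path takes lock A then B while another takes B then A, each can end up waiting on the other forever. A minimal sketch (hypothetical names, not the actual Xen code) of the rank-ordering discipline such a fix imposes:

```c
#include <assert.h>

/* Hypothetical sketch of a lock-ranking check: every path must take
 * the page-allocation lock before the grant-table lock.  Taking a
 * lower-ranked lock while a higher-ranked one is held is exactly the
 * kind of reversal XSA-73 corrected. */
enum { PAGE_ALLOC_LOCK = 1, GRANT_TABLE_LOCK = 2 };

static int held[8];
static int nheld;

static int take(int lock)
{
    if (nheld > 0 && held[nheld - 1] >= lock)
        return 0;            /* ordering violation: refuse */
    held[nheld++] = lock;
    return 1;
}

static void release_all(void)
{
    nheld = 0;
}
```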

[4.1.3-25.el5.83]
- piix4acpi, xen, hotplug: Fix race with ACPI AML code and hotplug.
This is a race, so the amount varies, but on a 4-PCPU box
I seem to get only ~14 of the 16 vCPUs I want to bring online.
The issue at hand is that QEMU xenstore.c hotplug code changes
the vCPU array and triggers an ACPI SCI for each vCPU
online/offline change. That means we modify the array of vCPUs
as the guest's ACPI AML code is reading it - resulting in
the guest reading the data only once and not changing the
CPU states appropriately.
The fix is to separate the vCPU array changes from the ACPI SCI
notification. The code now will enumerate all of the vCPUs
and change the vCPU array if there is a need for a change.
If a change did occur then only _one_ ACPI SCI pulse is sent
to the guest. The vCPU array at that point has the online/offline
modified to what the user wanted to have.
Specifically, if a user provided this command:
xl vcpu-set latest 16
(guest config has vcpus=1, maxvcpus=32) QEMU and the guest
(in this case Linux) would do:
QEMU:                                     Guest OS:
- xenstore_process_vcpu_set_event
  -> Gets an XenBus notification for CPU1
  -> Updates the gpe_state.cpus_state bitfield.
  -> Pulses the ACPI SCI
                                          - ACPI SCI kicks in
  -> Gets an XenBus notification for CPU2
  -> Updates the gpe_state.cpus_state bitfield.
  -> Pulses the ACPI SCI
  -> Gets an XenBus notification for CPU3
  -> Updates the gpe_state.cpus_state bitfield.
  -> Pulses the ACPI SCI
  ...
                                          - Method(PRST) invoked
  -> Gets an XenBus notification for CPU12
  -> Updates the gpe_state.cpus_state bitfield.
  -> Pulses the ACPI SCI
                                          - reads AF00 for CPU state
                                            [gets 0xff]
                                          - reads AF02 [gets 0x7f]
  -> Gets an XenBus notification for CPU13
  -> Updates the gpe_state.cpus_state bitfield.
  -> Pulses the ACPI SCI
  .. until vCPU 16
                                          - Method PRST updates
                                            PR01 through 13 FLG
                                            entry.
                                          - PR01->PR13 _MAD
                                            invoked.
                                          - Brings up 13 CPUs.
While QEMU updates the rest of the cpus_state bitfields the ACPI AML
only does the CPU hotplug on those it had read.
Signed-off-by: Konrad Rzeszutek Wilk
[v1: Use stack for the 'attr' instead of malloc/free]
Acked-by: Stefano Stabellini
Acked-by: George Dunlap (for 4.3 release)
Signed-off-by: Chuck Anderson [bug 17504060]
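The shape of the fix can be sketched as follows (hypothetical names; the real code lives in QEMU's xenstore.c and piix4acpi code): apply every pending vCPU change first, then send at most one SCI pulse:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_VCPUS 32

/* Hypothetical sketch of the batching fix: all vCPU online/offline
 * changes are applied to the array first, and a single ACPI SCI
 * pulse is sent afterwards only if something actually changed, so
 * the guest's AML never reads a half-updated array. */
static bool vcpu_online[MAX_VCPUS];
static int sci_pulses;

static void set_vcpus_and_notify(const bool *wanted, int n)
{
    bool changed = false;
    for (int i = 0; i < n; i++) {
        if (vcpu_online[i] != wanted[i]) {
            vcpu_online[i] = wanted[i];
            changed = true;
        }
    }
    if (changed)
        sci_pulses++;  /* exactly one pulse for the whole batch */
}
```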

[4.1.3-25.el5.82]
- piix4acpi, xen: Clarify that the qemu_set_irq calls just do an IRQ pulse.
The 'qemu_cpu_notify' raises and lowers the ACPI SCI line when the
vCPU state has changed.
Instead of doing the two functions, just use one function that
describes exactly what it does.
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Chuck Anderson [bug 17504060]

[4.1.3-25.el5.81]
- piix4acpi, xen, vcpu hotplug: Split the notification from the changes.
This is a preparatory patch that splits the notification
of a vCPU change from the actual changes to the vCPU array.
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Chuck Anderson [bug 17504060]

[4.1.3-25.el5.80]
- Backported Carson's changes - Requests to connect on port 8003 with a LOW/weak cipher are now rejected.
Signed-off-by: Carson Hovey [bug 17669909]


Related CVEs


CVE-2013-4553
CVE-2013-4554
CVE-2013-4494

Updated Packages


Release/Architecture: Oracle VM 3.2 (x86_64)

Filename                              MD5sum                            Superseded By Advisory
xen-4.1.3-25.el5.88.src.rpm           68abbb4b476dc179a69fe762fa36dc4b  OVMSA-2021-0014
xen-4.1.3-25.el5.88.x86_64.rpm        6842f311fa8a5aadf49e47bf26133af0  OVMSA-2021-0014
xen-devel-4.1.3-25.el5.88.x86_64.rpm  293617832fca0db9079bc37e5db6fab5  OVMSA-2019-0048
xen-tools-4.1.3-25.el5.88.x86_64.rpm  3d0270ba1998c96ed2fb3e21ce3049e1  OVMSA-2021-0014



This page is generated automatically and has not been checked for errors or omissions. For clarification or corrections please contact the Oracle Linux ULN team
