OVMSA-2015-0068

OVMSA-2015-0068 - xen security update

Type: SECURITY
Severity: IMPORTANT
Release Date: 2015-06-11

Description


[4.1.3-25.el5.127.52]
- x86/traps: loop in the correct direction in compat_iret()
This is XSA-136.
Reviewed-by: Jan Beulich
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 21219214] {CVE-2015-4164}
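The one-line summary above is the whole fix for XSA-136: a frame-copying loop in compat_iret() stepped its index in the wrong direction. The following is an illustrative sketch only (the frame size and function name are hypothetical, not the actual Xen source) showing why the corrected, incrementing loop stays within the frame bounds:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch, not the actual Xen source: XSA-136's fix made a
 * frame-copying loop in compat_iret() step its index in the correct
 * (increasing) direction, so exactly the words of one exception frame
 * are touched and the copy cannot run off the end of the frame. */

#define FRAME_WORDS 10  /* hypothetical frame size for this sketch */

/* Copy words 1..FRAME_WORDS-1 of a frame; returns the count copied. */
static size_t copy_frame_words(const uint32_t *src, uint32_t *dst)
{
    size_t copied = 0;
    /* The correct direction: ++i, bounded by FRAME_WORDS. A loop that
     * stepped the index the wrong way would make this bound useless
     * and walk outside the frame. */
    for (unsigned int i = 1; i < FRAME_WORDS; ++i) {
        dst[i] = src[i];
        ++copied;
    }
    return copied;
}
```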

[4.1.3-25.el5.127.51]
- pcnet: force the buffer access to be in bounds during tx
4096 is the maximum length per TMD, and it is also currently the size of
the relay buffer the pcnet driver uses for sending the packet data to QEMU
for further processing. With a packet spanning multiple TMDs it can
happen that the overall packet size will be bigger than sizeof(buffer),
which results in memory corruption.
Fix this by allowing at most sizeof(buffer) bytes to be queued.
This is CVE-2015-3209.
Signed-off-by: Petr Matousek
Reported-by: Matt Tait
Reviewed-by: Peter Maydell
Reviewed-by: Stefan Hajnoczi
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 21218615] {CVE-2015-3209}
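A hedged sketch of the bounds fix described above (struct and function names are illustrative, not the exact QEMU pcnet code): the relay buffer is 4096 bytes, the maximum length of a single TMD, but a packet may span several TMDs, so each chunk must be clamped against the running offset:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the CVE-2015-3209 fix; not the actual QEMU
 * pcnet source. The buffer holds one packet's payload, assembled from
 * one or more TMDs. */

#define XMIT_BUF_SIZE 4096

struct tx_state {
    uint8_t buffer[XMIT_BUF_SIZE];
    size_t  xmit_pos;  /* bytes already queued for this packet */
};

/* Queue one TMD's payload; returns the number of bytes accepted. */
static size_t queue_tmd(struct tx_state *s, const uint8_t *data,
                        size_t bcnt)
{
    /* The fix: never copy past the end of s->buffer, even when the
     * packet spans multiple TMDs. */
    if (bcnt > XMIT_BUF_SIZE - s->xmit_pos)
        bcnt = XMIT_BUF_SIZE - s->xmit_pos;
    memcpy(s->buffer + s->xmit_pos, data, bcnt);
    s->xmit_pos += bcnt;
    return bcnt;
}
```

Without the clamp, a second TMD arriving with xmit_pos already at 4096 would write past the end of the buffer; with it, the excess bytes are simply dropped.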

[4.1.3-25.el5.127.50]
- pcnet: fix negative array index read

From: Gonglei
s->xmit_pos may be assigned a negative value (-1),
but this branch uses s->xmit_pos as an index into
the array s->buffer. Let's add a check for s->xmit_pos.

upstream-commit-id: 7b50d00911ddd6d56a766ac5671e47304c20a21b
Signed-off-by: Gonglei
Signed-off-by: Paolo Bonzini
Reviewed-by: Jason Wang
Signed-off-by: Stefan Hajnoczi
Acked-by: Chuck Anderson
Reviewed-by: John Haxby [bug 21218615] {CVE-2015-3209}
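The guard this entry describes can be sketched as follows (names are illustrative, not the exact QEMU code): xmit_pos is -1 until a packet start is seen, so using it as an array index without a check would read before the start of the buffer:

```c
#include <assert.h>

/* Illustrative sketch of the negative-index guard; not the actual
 * QEMU pcnet source. */

struct tx_desc_state {
    int xmit_pos;  /* -1 means "no packet in progress" */
};

/* Returns 1 only when it is safe to index buffer[s->xmit_pos]. */
static int xmit_pos_valid(const struct tx_desc_state *s)
{
    /* The fix: bail out of the branch when xmit_pos is negative
     * instead of using it as an array index. */
    return s->xmit_pos >= 0;
}
```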

[4.1.3-25.el5.127.38]
- x86: vcpu_destroy_pagetables() must not return -EINTR
... otherwise it has the side effect that domain_relinquish_resources
will stop and return to user-space with -EINTR, an error code it is
not equipped to deal with; or vcpu_reset will ignore it and convert
the error to -ENOMEM.
The preemption mechanism we have for domain destruction is to return
-EAGAIN (and then user-space calls the hypercall again) and as such we need
to catch the case of:
domain_relinquish_resources
->vcpu_destroy_pagetables
-> put_page_and_type_preemptible
-> __put_page_type
returns -EINTR
and convert it to the proper type. For:
XEN_DOMCTL_setvcpucontext
-> vcpu_reset
-> vcpu_destroy_pagetables
we need to return -ERESTART otherwise we end up returning -ENOMEM.
The other callers of vcpu_destroy_pagetables, via arch_vcpu_reset
(vcpu_reset), are:
- hvm_s3_suspend (asserts on any return code),
- vlapic_init_sipi_one (asserts on any return code).
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Jan Beulich
Acked-by: Chuck Anderson [bug 20618712]
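The two call-path conversions described above can be sketched as follows. This is a hedged illustration with illustrative helper names and an assumed ERESTART value, not the actual Xen hypervisor source: -EINTR bubbling up from __put_page_type() must become -EAGAIN on the relinquish path and -ERESTART on the vcpu_reset path, since those are the "call me again" codes each caller understands:

```c
#include <assert.h>
#include <errno.h>

/* ERESTART is a kernel/hypervisor-internal code with no userspace
 * definition; the value here is an assumption for this sketch. */
#ifndef ERESTART
#define ERESTART 85
#endif

/* domain_relinquish_resources path: translate -EINTR to -EAGAIN so
 * user space re-issues the hypercall instead of failing on an error
 * code it is not equipped to handle. (Illustrative helper name.) */
static int relinquish_fixup(int rc)
{
    return (rc == -EINTR) ? -EAGAIN : rc;
}

/* XEN_DOMCTL_setvcpucontext -> vcpu_reset path: return -ERESTART,
 * otherwise the error would degrade to -ENOMEM. */
static int vcpu_reset_fixup(int rc)
{
    return (rc == -EINTR) ? -ERESTART : rc;
}
```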

[4.1.3-25.el5.127.37]
- mm: Make scrubbing a low-priority task
An idle processor will attempt to scrub pages left over by a previously
exited guest. The processor takes global heap_lock in scrub_free_pages(),
manipulates pages on the heap lists and releases the lock before performing
the actual scrubbing in __scrub_free_pages().
It has been observed that on some systems, even though scrubbing itself
is done with the lock not held, other unrelated heap users are unable
to take the (now free) lock. We theorize that massive scrubbing locks out
the bus (or some other HW resources), preventing lock requests from reaching
the scrubbing node.
This patch tries to alleviate this problem by having the scrubber monitor
whether there are other waiters for the heap lock and, if such waiters
exist, stop scrubbing.
To achieve this, we make two changes to existing code:
1. Parallelize the heap lock by breaking it into per-node locks
2. Create an atomic per-node counter array. Before a CPU on a particular
node attempts to acquire the (now per-node) lock it increments the counter.
The scrubbing processor periodically checks this counter and, if it is
non-zero, stops scrubbing.
A few notes:
1. Until now, total_avail_pages and midsize_alloc_zone_pages updates have been
performed under the single heap_lock. Since we no longer have this global lock,
we introduce pgcount_lock. Note that this is really only to protect readers
of these variables from reading inconsistent values (such as when another CPU
is in the middle of updating them). The values themselves are somewhat
'unsynchronized' from the actual heap state. We try to be conservative and
decrement them before pages are taken from the heap and increment them after
they are placed there.
2. Similarly, page_broken/offlined_list are no longer under heap_lock.
pglist_lock is added to synchronize access to those lists.
3. d->last_alloc_node used to be updated under heap_lock. It was read, however,
without holding this lock so it seems that lockless updates will not make the
situation any worse (and since these updates are simple writes, as opposed to
some sort of RMW, we shouldn't need to convert it to an atomic).
Signed-off-by: Boris Ostrovsky
Reviewed-by: Konrad Rzeszutek Wilk
Acked-by: Chuck Anderson [bug 20326458]
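The two changes listed above (per-node locks plus an atomic waiter counter) can be sketched as follows. All names here are illustrative, not the actual Xen mm code: allocating CPUs bump their node's counter before contending for the per-node heap lock, and the scrubber polls that counter and yields as soon as anyone is waiting:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative sketch of the low-priority-scrubbing scheme; not the
 * actual Xen source. One atomic waiter counter per NUMA node. */

#define MAX_NODES 4

static atomic_int node_need_heap[MAX_NODES];  /* waiters per node */

/* Called by an allocating CPU around its per-node lock acquisition:
 * increment before attempting the lock, decrement after releasing. */
static void heap_lock_enter(int node)
{
    atomic_fetch_add(&node_need_heap[node], 1);
}

static void heap_lock_exit(int node)
{
    atomic_fetch_sub(&node_need_heap[node], 1);
}

/* Scrubber: keep scrubbing only while nobody wants this node's heap
 * lock; a non-zero counter tells it to stop and get out of the way. */
static bool scrub_should_continue(int node)
{
    return atomic_load(&node_need_heap[node]) == 0;
}
```

The counter is checked periodically between scrub batches, so an allocator waits at most one batch before the scrubber backs off.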


Related CVEs


CVE-2015-3209
CVE-2015-4164

Updated Packages


Release/Architecture: Oracle VM 3.2 (x86_64)

Filename                                  MD5sum                            Superseded By Advisory
xen-4.1.3-25.el5.127.52.src.rpm           f98f29f1a9170b76c839ed3525f65d71  OVMSA-2021-0014
xen-4.1.3-25.el5.127.52.x86_64.rpm        84192dc4e15b297a583ee081b497c089  OVMSA-2021-0014
xen-devel-4.1.3-25.el5.127.52.x86_64.rpm  9bebb42a0a148d6a20099fcae6aea265  OVMSA-2019-0048
xen-tools-4.1.3-25.el5.127.52.x86_64.rpm  9d596bfaea8eda77f5d086173035aef6  OVMSA-2021-0014



This page is generated automatically and has not been checked for errors or omissions. For clarification or corrections, please contact the Oracle Linux ULN team.

software.hardware.complete