While freeing the memory associated with a domain during domain destruction, Xen could race with a toolstack domain reducing that same domain's memory via the XENMEM_decrease_reservation hypercall. If this race is hit, the host crashes. The race is not exposed via the XENMEM_remove_from_physmap or XENMEM_exchange interfaces.

A domain deliberately given partial management control may be able to deny service by crashing the host. Such a domain needs to be granted access to at least one of XENMEM_decrease_reservation or XEN_DOMCTL_destroydomain over another domain. As a result, in a system designed to enhance security by radically disaggregating management, security may be reduced; however, it will be no worse than in a non-disaggregated design.

This issue is only relevant to systems which intend to increase security through the use of advanced disaggregated management techniques. It does not affect systems using libxl, libvirt, or OpenStack (unless substantially modified or supplemented, as compared to the versions supplied by the respective upstreams).

Mitigation: There is no known mitigation. Switching from disaggregated to non-disaggregated operation does NOT mitigate this vulnerability; rather, it recategorises the vulnerability to hostile management code, regarding it "as designed", and thus merely reclassifies the issue as "not a bug". Users and vendors of disaggregated systems should not change their configuration.
Created attachment 1082805 [details] Upstream patch
External References: http://xenbits.xen.org/xsa/advisory-147.html
Created xen tracking bugs for this issue: Affects: fedora-all [bug 1276344]
xen-4.5.1-14.fc23 has been pushed to the Fedora 23 stable repository. If problems still persist, please make note of it in this bug report.
xen-4.5.1-14.fc22 has been pushed to the Fedora 22 stable repository. If problems still persist, please make note of it in this bug report.
xen-4.4.3-7.fc21 has been pushed to the Fedora 21 stable repository. If problems still persist, please make note of it in this bug report.