libvirt/tests/cputestdata/x86_64-baseline-Westmere+Nehalem-migratable.xml
Jiri Denemark bb6cedd208 cpu_x86: Ignore enabled features for input models in x86DecodeUseCandidate
While we don't want to aim for the shortest list of disabled features in
the baseline result (it would select a very old model), we want to do so
while looking at any of the input models for which we're trying to
compute a baseline CPU model. Given a set of input models, we always
want to take the least capable one of them (i.e., the one with the
shortest list of disabled features) or a better model which is not one of the
input models.

So when considering an input model, we just check whether its list of
disabled features is shorter than the currently best one. When looking
at other models, we check both enabled and disabled features while
penalizing disabled features as implemented by the previous patch.

Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
2022-05-06 17:33:47 +02:00
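
The heuristic described in the commit message boils down to a short comparison routine. The following is only a minimal sketch of that selection logic, not the actual code in src/cpu/cpu_x86.c; the struct layout, the function name candidate_is_better and the DISABLED_PENALTY weight are assumptions made for illustration.

/*
 * Minimal sketch of the selection heuristic described above. This is
 * not the actual libvirt implementation; the structure, function name
 * and the DISABLED_PENALTY weight are illustrative assumptions.
 */
#include <stdbool.h>
#include <stddef.h>

#define DISABLED_PENALTY 2  /* assumed weight for disabled features */

struct candidate {
    const char *name;
    size_t nenabled;      /* features that would need policy='require' */
    size_t ndisabled;     /* features that would need policy='disable' */
    bool is_input_model;  /* model is one of the baseline inputs */
};

/* Return true if 'cand' should replace 'best' as the current pick. */
static bool
candidate_is_better(const struct candidate *cand,
                    const struct candidate *best)
{
    if (!best)
        return true;

    /* For an input model only the length of the disabled-feature list
     * matters: pick the least capable of the input models. */
    if (cand->is_input_model)
        return cand->ndisabled < best->ndisabled;

    /* Any other model must win on a combined score in which disabled
     * features are penalized more heavily than enabled ones. */
    return cand->nenabled + DISABLED_PENALTY * cand->ndisabled <
           best->nenabled + DISABLED_PENALTY * best->ndisabled;
}

In words: an input model wins purely by having fewer disabled features, while any other model has to beat the current best on a combined score in which disabled features weigh more than enabled ones.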


<cpu mode='custom' match='exact'>
  <model fallback='allow'>Westmere</model>
  <vendor>Intel</vendor>
  <feature policy='require' name='vme'/>
  <feature policy='require' name='ss'/>
  <feature policy='require' name='pclmuldq'/>
  <feature policy='require' name='pcid'/>
  <feature policy='require' name='x2apic'/>
  <feature policy='require' name='tsc-deadline'/>
  <feature policy='require' name='xsave'/>
  <feature policy='require' name='osxsave'/>
  <feature policy='require' name='avx'/>
  <feature policy='require' name='hypervisor'/>
</cpu>
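
This file is test data for libvirt's CPU baseline computation: the expected result of baselining Westmere and Nehalem host CPUs while keeping only migratable features. The sketch below shows how such a baseline could be requested through the public API; it is illustrative only, and the connection URI and the minimal host CPU XML strings are placeholders rather than the actual test inputs.

/*
 * Illustrative only: request a migratable baseline of Westmere and
 * Nehalem host CPUs through the public API. The connection URI and the
 * minimal host CPU XML strings are placeholders, not the test inputs.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *cpus[] = {
        "<cpu><arch>x86_64</arch><model>Westmere</model>"
        "<vendor>Intel</vendor></cpu>",
        "<cpu><arch>x86_64</arch><model>Nehalem</model>"
        "<vendor>Intel</vendor></cpu>",
    };
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    /* Only migratable features may appear in the result. */
    char *baseline = virConnectBaselineCPU(conn, cpus, 2,
                                           VIR_CONNECT_BASELINE_CPU_MIGRATABLE);
    if (baseline) {
        printf("%s\n", baseline);
        free(baseline);
    }

    virConnectClose(conn);
    return 0;
}

On the command line, virsh cpu-baseline with the --migratable option provides roughly the same functionality.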