VM CPU Hotplug

CPU hotplug works similarly to memory ballooning: one has to specify the maximum number of virtual CPUs available (which cannot be changed at runtime) and the number of currently enabled CPUs.

libvirt XML

In the XML, the vcpu element has to be set as follows:

  <vcpu placement='static' current='N'>M</vcpu>

where N is the number of currently enabled CPUs and M is the maximum number of CPUs.
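
To see what N and M currently are for a running domain, libvirt can be queried directly. A minimal check with virsh vcpucount (the UUID below is the example domain used throughout this page):

# Show the maximum (M) and current (N) vCPU counts as seen by libvirt
virsh vcpucount 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2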

Example commands: add a CPU at runtime using virsh

The following only works if the QEMU Guest Agent is running inside the guest. If that is not the case, one may still add a CPU, but Linux will not bring it online automatically, and removing CPUs is not possible (since libvirt cannot be sure that the guest has taken the CPU offline prior to removal).
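
Whether the guest agent is reachable can be checked from the host before attempting a live change. This is just a quick sanity check, assuming the guest agent channel is configured for the domain; if the agent answers, the command prints an empty result object ({"return":{}}), otherwise it fails with an error:

# Ping the QEMU Guest Agent inside the guest
virsh qemu-agent-command 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 '{"execute":"guest-ping"}'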

# Set the number of vCPUs to 2 = adding one CPU if the VM was started with N=1, M=2
virsh setvcpus --live --guest 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 2
 
# Record the change in the XML (--config and --guest are mutually exclusive options)
virsh setvcpus --config 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 2
 
# and update the LDAP (using the helper script available in the stoney conductor)
./change-vm-attribute.pl --config ldap.conf --vm 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 --vcpu 2

This leads to the following output within the VM:

[  236.693907] CPU 1 got hotplugged
[  283.406919] SMP alternatives: switching to SMP code
[  283.418566] smpboot: Booting Node 0 Processor 1 APIC 0x1
[    0.020000] kvm-clock: cpu 1, msr 0:3ff97041, secondary cpu clock
[  283.430008] TSC synchronization [CPU#0 -> CPU#1]:
[  283.430008] Measured 570321522131 cycles TSC warp between CPUs, turning off TSC clock.
[  283.430008] tsc: Marking TSC unstable due to check_tsc_sync_source failed
[  283.445610] KVM setup async PF for cpu 1
[  283.445618] kvm-stealtime: cpu 1, msr 3fd0cb80
[  283.445664] Will online and init hotplugged CPU: 1

If there is no guest agent, you will see only the first line.
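
In that case the newly added CPU can still be brought online by hand inside the guest via sysfs. A minimal sketch, assuming the hotplugged CPU shows up as cpu1 (adjust the number as needed):

# Inside the guest: bring the hotplugged CPU online manually
echo 1 > /sys/devices/system/cpu/cpu1/online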

Example commands: remove a CPU at runtime using virsh

# Set the number of vCPUs to 1 = removing one CPU if the VM was running with N=2, M=2
virsh setvcpus --live --guest 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 1
 
# Record the change in the XML (--config and --guest are mutually exclusive options)
virsh setvcpus --config 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 1
 
# and update the LDAP (using the helper script available in the stoney conductor)
./change-vm-attribute.pl --config ldap.conf --vm 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 --vcpu 1

This leads to the following output within the VM:

[  352.153088] Unregister pv shared memory for cpu 1
[  352.260054] smpboot: CPU 1 is now offline
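
To confirm that the CPU is really gone, the count can be checked both inside the guest and from the host. A quick check (nothing stoney-specific, the UUID is again the example domain):

# Inside the guest: show the number of online CPUs
nproc

# On the host: show the number of active vCPUs of the running domain
virsh vcpucount --active --live 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2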

Possible caveats

Maximum CPUs can't be changed at runtime

The VM must be started with the maximum set higher than 1 to be able to hot-add CPUs.
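
If a VM was defined with a maximum of 1, the maximum has to be raised in the persistent configuration first; the change only takes effect after the guest has been shut down and started again. A sketch using virsh setvcpus with --maximum (the value 4 is just an example):

# Raise the maximum vCPU count (M) in the persistent configuration;
# this only takes effect on the next boot of the VM
virsh setvcpus --maximum --config 375e8f9c-8bc7-4bb3-8d9b-fdfe448ce0c2 4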

Removing CPUs may leave an SMP task scheduler on a UP system

When adding CPUs, Linux automatically switches to an SMP-aware task scheduler. When reducing the number of CPUs to 1, there is no kernel log entry saying that the kernel switches back to a UP scheduler, so it is possibly still using an SMP scheduler on a now-UP system.
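
To see what the kernel logged about scheduler/alternatives switching, the kernel log can be searched for the "SMP alternatives" messages quoted above (a plain grep, nothing specific to this setup):

# Inside the guest: list all "SMP alternatives" messages logged so far
dmesg | grep -i 'SMP alternatives'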