<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.stoney-cloud.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pat</id>
	<title>stoney-cloud.org - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.stoney-cloud.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pat"/>
	<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/wiki/Special:Contributions/Pat"/>
	<updated>2026-05-13T13:47:07Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.xmi&amp;diff=3782</id>
		<title>File:wrapper-interaction.xmi</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.xmi&amp;diff=3782"/>
		<updated>2014-06-27T12:59:04Z</updated>

		<summary type="html">&lt;p&gt;Pat: Initial version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Initial version&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3781</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3781"/>
		<updated>2014-06-27T12:58:47Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Current Implementation (Backup) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory may be its own partition; if so, it needs the same size as the partition for your live images (it&#039;s effectively a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The basic idea behind backing up a VM or a VM-Template is to divide the task into three subtasks:&lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description of these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can call these three sub-processes independently for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (create-, export- and commitSnapshot) one after the other. The control instance therefore only needs some very basic logic:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if ( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots of the machines lie as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. Only after every machine has been snapshotted does it call the exportSnapshot and commitSnapshot processes for every machine. The most important part is that the control instance remembers whether the snapshot for a given machine was successful: if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes for that machine. So the control instance needs a little more logic:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // Check if the element at this position is not null; if so, the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
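The two-phase logic above can also be sketched as a small shell script; the three sub-process functions are stubs here, not the real stoney cloud implementations:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the two-phase strategy: snapshot all machines first, then
# export and commit only the machines whose snapshot succeeded.
# createSnapshot/exportSnapshot/commitSnapshot are stubs, not the real
# stoney cloud sub-processes.
createSnapshot()  { [ "$1" != "vm-broken" ]; }   # stub: fails for vm-broken
exportSnapshot()  { true; }                      # stub
commitSnapshot()  { true; }                      # stub

machines="vm-001 vm-broken vm-002"
successful=""

# Phase 1: snapshot every machine first, so the snapshots lie close together
for m in $machines; do
    if createSnapshot "$m"; then
        successful="$successful $m"
    else
        echo "Error while snapshotting machine $m" >&2
    fi
done

# Phase 2: export and commit only the successfully snapshotted machines
for m in $successful; do
    if exportSnapshot "$m" && commitSnapshot "$m"; then
        echo "Successfully backed up machine $m"
    else
        echo "Error while exporting/committing machine $m" >&2
    fi
done
```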
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
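As a defensive variant of the copy above, the exported image can be compared byte for byte against the original before the backup is trusted; the paths below are temporary placeholder files, not real image locations:&lt;br /&gt;

```shell
#!/bin/sh
# Defensive export sketch: copy the (read-only) disk image, then verify
# the backup byte for byte before trusting it. The files here are
# temporary placeholders for the real image and backup locations.
src=$(mktemp)
backupdir=$(mktemp -d)
echo "placeholder disk image content" > "$src"

cp -p "$src" "$backupdir/"            # -p preserves mode and timestamps
if cmp -s "$src" "$backupdir/$(basename "$src")"; then
    verified=yes
else
    verified=no
fi
echo "export verified: $verified"
```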
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to involve a backend (in our case OpenLDAP) in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1):&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:Daemon-communication.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039; ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]); it should be written at the time the backup is planned and is to be executed. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
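Assuming GNU date is available, the localtime-to-UTC conversion used for the sub tree name can be reproduced on the command line (CEST is UTC+2):&lt;br /&gt;

```shell
# Convert the planned local execution time (03:00 CEST, i.e. UTC+2 with
# daylight saving) to the UTC timestamp format used for the backup sub
# tree (ISO 8601 basic format). Requires GNU date for the -d option.
ts=$(date -u -d "2012-10-02 03:00:00 +02:00" +%Y%m%dT%H%M%SZ)
echo "$ts"
```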
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk-images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk-images back to the underlying ones is done.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not yet have a working control instance, we need a workaround for backing up the machines:&lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation of the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic to the BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup; if yes, remove it from the list&lt;br /&gt;
#** Check if the last backup was successful; if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machine list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree a last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
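The list-building in step 1 can be sketched as follows; is_excluded and last_backup_ok are hypothetical stand-ins for the real LDAP queries (sstbackupexcludefrombackup and the last backup&#039;s sstProvisioningReturnValue):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of step 1: build the list of machines to back up on this host,
# dropping excluded machines and machines whose last backup failed.
# is_excluded/last_backup_ok are hypothetical stubs standing in for the
# real LDAP lookups (sstbackupexcludefrombackup, sstProvisioningReturnValue).
is_excluded()     { [ "$1" = "vm-excluded" ]; }   # stub
last_backup_ok()  { [ "$1" != "vm-failed" ]; }    # stub

all_machines="vm-001 vm-excluded vm-failed vm-002"
backup_list=""
for m in $all_machines; do
    is_excluded "$m" && continue       # skip machines excluded from backup
    last_backup_ok "$m" || continue    # skip machines with a failed last backup
    backup_list="$backup_list $m"
done
echo "machines to back up:$backup_list"
```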
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrappers interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:wrapper-interaction.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob:&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
If you want to exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree already exists in the LDAP directory, you need to add the sstbackupexcludefrombackup attribute instead:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The restore process is neither defined nor implemented yet. The following documentation describes the old restore process.&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone!&lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (the protocol is file://).&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
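The comparison in step 3 can be done with a plain diff; the sketch below uses temporary files as stand-ins for the live and retained backend entries:&lt;br /&gt;

```shell
# Sketch: detect conflicts between the live backend entry and the retained one.
# In practice both files would be the real backend entries, not temporary stand-ins.
live_entry="$(mktemp)"
retained_entry="$(mktemp)"
printf 'sstMemory: 1024\n' > "${live_entry}"
printf 'sstMemory: 2048\n' > "${retained_entry}"

if diff -u "${live_entry}" "${retained_entry}" > /dev/null; then
    result="no conflicts"
else
    # Differences must be resolved by editing the entry at the retain location.
    result="conflicts found"
fi
echo "${result}"
```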
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shut down the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
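As an illustration of this backend-driven control, a daemon could poll the directory for backup entries whose sstProvisioningMode requests an action. The host name, base DN and filter below are illustrative placeholders, not the configured values:&lt;br /&gt;

```shell
# Sketch: find backup subtree entries whose sstProvisioningMode asks this daemon to act.
ldapsearch -H ldaps://ldapm.stoney-cloud.org \
    -b "ou=virtualization,ou=services,dc=stoney-cloud,dc=org" \
    -x -LLL "(sstProvisioningMode=unretainSmallFiles)" \
    sstProvisioningMode sstProvisioningState
```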
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files at the configured retain location, the restore process can be started: we simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
&#039;&#039;&#039;Attention&#039;&#039;&#039;: The restore process is neither defined nor implemented yet. The following documentation describes the old restore process. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (c.f. [[stoney_conductor:_Backup#Current_Implementation_.28Backup.29]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some bash variables so that you can copy-paste the commands in this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all the variables you set here. If one of them is not correct, you will restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the different variables and &#039;&#039;&#039;double check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin phpLDAPadmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF you just edited to the LDAP directory (after performing some general replacements):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And restore the domain from the state file at the backup location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:Daemon-communication.xmi&amp;diff=3780</id>
		<title>File:Daemon-communication.xmi</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:Daemon-communication.xmi&amp;diff=3780"/>
		<updated>2014-06-27T12:58:02Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:Daemon-communication.xmi&amp;amp;quot;: Correct workflow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Initial Version&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:Daemon-communication.xmi&amp;diff=3779</id>
		<title>File:Daemon-communication.xmi</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:Daemon-communication.xmi&amp;diff=3779"/>
		<updated>2014-06-27T12:57:15Z</updated>

		<summary type="html">&lt;p&gt;Pat: Initial Version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Initial Version&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3778</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3778"/>
		<updated>2014-06-27T12:55:51Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Communication through backend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be its own partition, which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file. The underlying disk image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
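Assuming a libvirt-based setup, the three subtasks roughly map to the commands below. The machine name, disk target and paths are placeholders for illustration; the linked pages contain the authoritative command sequences.&lt;br /&gt;

```shell
# createSnapshot: take a disk-only external snapshot; writes now go to the overlay file
virsh snapshot-create-as vm-001 backup-snap --disk-only --atomic --no-metadata \
    --diskspec vda,file=/path/to/images/vm-001-overlay.qcow2

# exportSnapshot: the underlying image is now read-only and can safely be copied away
cp -p /path/to/images/vm-001.qcow2 /path/to/backup/vm-001.qcow2

# commitSnapshot: merge the overlay back into the base image and pivot back to it
virsh blockcommit vm-001 vda --active --pivot
rm /path/to/images/vm-001-overlay.qcow2
```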
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (create-, export- and commitSnapshot) one after the other. The control instance only needs some very basic logic: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots of the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes for that machine. So the control instance needs a little more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // Check if the element at this position is not null; then the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to involve a backend (in our case OpenLDAP) in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. So the communication could look like shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:Daemon-communication.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
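The trigger mechanism shown in Figure 1 can be illustrated with a few lines of shell. This is a hedged sketch, not the actual daemon code: extract_mode parses sstProvisioningMode out of LDIF text so the logic can be exercised without a live directory, and the ldapsearch options in the comment are placeholders.

```shell
# Print the value of sstProvisioningMode from LDIF text on stdin.
extract_mode() {
    sed -n 's/^sstProvisioningMode: //p'
}

# Against a live directory this would be something like (options are placeholders):
#   ldapsearch -x -LLL -H ldaps://ldapm.stepping-stone.ch/ \
#       -b "ou=20121002T010000Z,ou=backup,..." sstProvisioningMode | extract_mode
mode="$(printf 'sstProvisioningMode: snapshot\n' | extract_mode)"

# React to the trigger value, as the backup daemon would.
case "$mode" in
    snapshot) echo "start snapshot process" ;;
    export)   echo "start export process" ;;
    commit)   echo "start commit process" ;;
esac
```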
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says, that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039; ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned and should be executed, written in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]). The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
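The localtime-to-UTC conversion described above can be reproduced with GNU date (an assumption; BSD date parses dates differently). Europe/Zurich is an assumed time zone here, chosen because the example DNs use c=ch; on 2012-10-02 daylight-saving time is active, so 03:00 local time is 01:00 UTC:

```shell
# Derive the backup sub tree name (ISO 8601 basic format, UTC) from the
# planned local execution time. Assumes GNU date.
stamp="$(TZ=UTC date -d 'TZ="Europe/Zurich" 2012-10-02 03:00' +%Y%m%dT%H%M%SZ)"
echo "$stamp"   # 20121002T010000Z
```

This is exactly the value used as the ou of the backup sub tree above.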
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-VKM daemon knows, that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-VKM daemon knows, that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the disk images of the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-VKM daemon knows, that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing the changes from the overlay disk images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk images back to the underlying ones is done. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished, and therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished, and therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance yet, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same time | Backup multiple machines at the same time]]).&lt;br /&gt;
* We already have the implementation of the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machine list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
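The list-building step above can be sketched roughly in shell. This is an illustration, not the real LDAPKVMWrapper.pl: is_excluded and last_backup_failed are hypothetical stand-ins for the actual LDAP lookups, and the machine names are invented.

```shell
# Hypothetical helpers standing in for the real LDAP lookups:
is_excluded() {          # would check sstbackupexcludefrombackup in LDAP
    case "$1" in vm-excluded) return 0 ;; *) return 1 ;; esac
}
last_backup_failed() {   # would check the previous run's provisioning state
    case "$1" in vm-broken) return 0 ;; *) return 1 ;; esac
}

# Build the list of machines that still qualify for a backup run.
build_backup_list() {
    for m in "$@"; do
        if is_excluded "$m"; then continue; fi
        if last_backup_failed "$m"; then continue; fi
        printf '%s\n' "$m"
    done
}

list="$(build_backup_list vm-001 vm-excluded vm-broken vm-002)"
printf '%s\n' "$list"   # only vm-001 and vm-002 remain
```

The real wrapper would then update the backup subtrees and hand this list to BackupKVMWrapper.pl.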
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following command.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, you simply add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree already exists in the LDAP directory, you need to add the sstbackupexcludefrombackup attribute instead: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
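Deleting the sub tree can be done with ldapdelete -r, which removes the entry together with all its children; the bind options below mirror the exclusion examples above and may differ on your installation. This sketch only builds and prints the DN, the actual delete is left commented out:

```shell
# Example UUID, as in the exclusion examples above.
machineuuid="b9d13dbc-9ab7-4948-9daa-a5709de83dc2"
dn="ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
echo "$dn"

# Against the live directory (not executed here):
#   ldapdelete -r -x -W -D cn=Manager,o=stepping-stone,c=ch \
#       -H ldaps://ldapm.stepping-stone.ch/ "$dn"
```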
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The restore process is neither defined nor implemented yet. The following documentation describes the old restore process. &lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
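The restore steps above, collected into one hedged shell sketch. The paths and the vm-001 name are illustrative, and RUN=echo keeps it a dry run: each command is printed instead of executed, so the sequence can be inspected safely.

```shell
# Dry-run sketch of the restore sequence; drop RUN=echo to execute for real.
RUN=echo
restore_vm() {
    name="$1"
    "$RUN" virsh shutdown "$name"   # stop the VM if it is running
    "$RUN" virsh undefine "$name"   # remove the definition if still present
    "$RUN" mv "/path/to/retain/${name}.qcow2" "/path/to/images/${name}.qcow2"
    "$RUN" cp -p "/path/to/retain/${name}.xml" "/path/to/xmls/${name}.xml"
    "$RUN" virsh restore "/path/to/retain/${name}.state" --xml "/path/to/xmls/${name}.xml"
}
restore_vm vm-001
```

Restoring the backend entry has no single CLI equivalent here and is omitted from the sketch.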
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-VKM daemon knows, that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime, the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to reconcile possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
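The merge itself is done by the vm-manager; as a rough, purely illustrative sketch of the comparison involved (file names and attribute values below are made-up stand-ins, not part of the implementation), one could diff the retained LDIF against the live one:

```shell
# Hypothetical sketch: compare a retained LDIF with the live one to surface
# configuration drift before merging. File contents are made-up stand-ins.
retained=$(mktemp) && live=$(mktemp)

printf 'dn: sstVirtualMachine=kvm-005\nsstMemory: 1024\n' > "$retained"
printf 'dn: sstVirtualMachine=kvm-005\nsstMemory: 2048\n' > "$live"

# diff exits non-zero when the entries differ
if diff -u "$retained" "$live" > /dev/null; then
    msg="no differences"
else
    msg="configuration differs - manual review needed"
fi
echo "$msg"

rm -f "$retained" "$live"
```

A non-empty diff simply flags that the backed-up configuration diverged from the live one and needs reconciliation.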
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started. We simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current timestamp by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
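Taken together, steps 01 through 12 form a simple state machine driven through the sstProvisioningMode attribute: modes such as unretainSmallFiles are commands set by the Control instance daemon, while the "-ing"/"-ed" modes are progress reports from the Provisioning-Backup-KVM daemon. A minimal sketch of the transitions (the function itself is illustrative, not part of the implementation):

```shell
# Sketch of the restore state machine described in steps 01-12.
# Command modes are set by the Control instance daemon; the "-ing"/"-ed"
# modes report progress from the Provisioning-Backup-KVM daemon.
next_mode() {
    case "$1" in
        unretainSmallFiles)    echo unretainingSmallFiles ;; # KVM daemon acknowledges
        unretainingSmallFiles) echo unretainedSmallFiles ;;  # small files copied
        unretainedSmallFiles)  echo unretainLargeFiles ;;    # Control daemon advances
        unretainLargeFiles)    echo unretainingLargeFiles ;;
        unretainingLargeFiles) echo unretainedLargeFiles ;;  # large files copied
        unretainedLargeFiles)  echo restore ;;               # Control daemon advances
        restore)               echo restoring ;;
        restoring)             echo restored ;;              # VM restored from state file
        restored)              echo finished ;;              # Control daemon finalizes
        *)                     echo unknown ;;
    esac
}

# Walk the whole chain from the first command to the terminal mode.
mode="unretainSmallFiles"
while [ "$mode" != "finished" ]; do
    mode=$(next_mode "$mode")
done
echo "$mode"   # finished
```

Each transition in the sketch corresponds to one of the LDIF modifications shown above.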
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
&#039;&#039;&#039;Attention&#039;&#039;&#039;: The restore process is neither defined nor implemented yet. The following documentation describes the old restore process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#Current_Implementation_.28Backup.29]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup, if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy-paste the commands in this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one of them is not correct, you might restore over a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
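Given that naming scheme, the backup date can also be derived from the state file's suffix instead of being typed by hand. This is an illustrative convenience only (the guide below sets the variables manually); the sample directory and file name are made up:

```shell
# Illustrative sketch: derive ${backupdate} from the state file name,
# assuming the <MACHINE-NAME>.state.<BACKUP-DATE> naming scheme shown above.
workdir=$(mktemp -d)
touch "$workdir/b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z"

# Pick the (single) state file and strip everything up to ".state."
statefile=$(ls "$workdir"/*.state.* | head -n 1)
backupdate="${statefile##*.state.}"
echo "$backupdate"   # 20140109T134445Z

rm -rf "$workdir"
```

Double-check the derived value against the directory name before using it.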
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a retain directory and copy the LDIF file there:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=$(date --utc +&#039;%Y%m%dT%H%M%SZ&#039;)&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin phpLDAPadmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF you just edited back to the LDAP directory (first do some general replacements):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i \&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039; \&lt;br /&gt;
 -e &#039;/member.*/d&#039; \&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, restore the domain from the state file at the backup location, using the XML from the retain location (the one you might have edited):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it left off when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:node-integration.xmi&amp;diff=3777</id>
		<title>File:node-integration.xmi</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:node-integration.xmi&amp;diff=3777"/>
		<updated>2014-06-27T12:54:49Z</updated>

		<summary type="html">&lt;p&gt;Pat: Initial version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Initial version&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3776</id>
		<title>stoney cloud: Multi-Node Installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3776"/>
		<updated>2014-06-27T12:54:34Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Node Integration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The [http://www.stoney-cloud.org/ stoney cloud] builds upon various standard open source components and can be run on commodity hardware. The final Multi-Node Setup consists of the following components:&lt;br /&gt;
* One [[Primary-Master-Node]] with an OpenLDAP directory server for the storage of the stoney cloud user and service related data, the web-based [[VM-Manager]] interface and the Linux kernel based virtualization technology.&lt;br /&gt;
* One [[Secondary-Master-Node]] with the Linux kernel based virtualization technology.&lt;br /&gt;
* Two [[Storage-Node | Storage-Nodes]] configured as a replicated and distributed data storage service based on [http://www.gluster.org/ GlusterFS].&lt;br /&gt;
The components communicate with each other over a standard Ethernet based IPv4 network.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
The following items and conditions are required to be able to install and configure a stoney cloud environment:&lt;br /&gt;
* Dedicated Hardware (4 Servers) which fulfil the following requirements: &lt;br /&gt;
** 64-bit Intel CPU with VT technology (AMD is not tested at the moment).&lt;br /&gt;
** 8 Gigabyte Memory (more is better).&lt;br /&gt;
** Disks from 147 Gigabyte up to 2 Terabyte.&lt;br /&gt;
*** For all four Nodes ([[VM-Node | VM-Nodes]] and [[Storage-Node | Storage-Nodes]]) two 147 Gigabyte Disks, configured as RAID1, are enough for the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
*** Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
** Two physical Ethernet Interfaces, which support the same bandwidth (for example 1 Gigabit/s).&lt;br /&gt;
* Two Gigabit layer-2 switches supporting [http://en.wikipedia.org/wiki/IEEE_802.3ad IEEE 802.3ad] (dynamic link aggregation), [http://en.wikipedia.org/wiki/IEEE_802.1Q IEEE 802.1Q] (VLAN tagging) and stacking (optional, but recommended).&lt;br /&gt;
* Experience with Linux environments especially with [http://www.gentoo.org Gentoo Linux].&lt;br /&gt;
* Good experience with IP networking, because the switches need to be configured manually (dynamic link aggregation and VLAN tagging).&lt;br /&gt;
&lt;br /&gt;
=== Limitations ===&lt;br /&gt;
During the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system, the first two physical Ethernet Interfaces are automatically configured as a logical interface (bond0) and the four tagged VLANs are set up. If more than two physical Ethernet Interfaces are to be included into the logical interface (bond0), or the physical Ethernet Interfaces have different bandwidths (for example 1 Gigabit/s and 10 Gigabit/s), the final logical interface (bond0) needs to be configured manually after the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
Only the first two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] are set up automatically. More [[Storage-Node | Storage-Nodes]] need to be integrated manually.&lt;br /&gt;
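For such a manual configuration, a bond0 setup in /etc/conf.d/net (Gentoo netifrc) might look roughly like the fragment below. This is a sketch only: the variable syntax is assumed from netifrc conventions and the addresses are vm-node-01's from the tables in this document; verify everything against the netifrc documentation before use.

```shell
# Sketch of /etc/conf.d/net (Gentoo netifrc) - assumed syntax, verify before use.
# Bond eth0 and eth1 into bond0 using 802.3ad, then tag the four VLANs on top.
slaves_bond0="eth0 eth1"
mode_bond0="802.3ad"
config_bond0="null"                  # no address on the raw bond itself

vlans_bond0="110 120 130 140"
config_bond0_110="10.1.110.13/24"    # admin (example: vm-node-01)
config_bond0_120="10.1.120.13/24"    # data
config_bond0_130="10.1.130.13/24"    # int
config_bond0_140="null"              # pub: bridged into vmbr0 instead
```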
&lt;br /&gt;
=== Network Overview ===&lt;br /&gt;
As stated before, a minimal multi-node stoney cloud environment consists of two VM-Nodes and two Storage-Nodes.&lt;br /&gt;
It is highly recommended to use IEEE 802.3ad link aggregation (bonding, trunking etc.) over two network cards and attach them as one logical link to the access switch.&lt;br /&gt;
&lt;br /&gt;
How to configure the switches for stacking, link aggregation and VLAN tagging is out of the scope of this document; consult the respective user manual.&lt;br /&gt;
&lt;br /&gt;
There are two scenarios for connecting the nodes to the network.&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 1 (recommended) ====&lt;br /&gt;
The preferred solution is to use two stackable switches (supporting link aggregation across two switches). Connect the nodes and switches as illustrated below. This eliminates the single point of failure present in scenario 2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+        &lt;br /&gt;
                                     bond0 | \      / | bond0&lt;br /&gt;
                                           |  \    /  |&lt;br /&gt;
                                           |   \  /   |&lt;br /&gt;
                                           |    \/    |                                          ___&lt;br /&gt;
                                           |    /\    |                                      ___(   )___&lt;br /&gt;
                                           |   /  \   |                                   __(           )__&lt;br /&gt;
                                           |  /    \  |                                 _(                 )_&lt;br /&gt;
                                 +-------------+   +-------------+                    _(                     )_&lt;br /&gt;
                                 |  switch-01  |===|  switch-02  |-------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                 +-------------+   +-------------+                     (_                   _)&lt;br /&gt;
                                           | \      / |                                  (__             __)&lt;br /&gt;
                                           |  \    /  |                                     (___     ___)&lt;br /&gt;
                                           |   \  /   |                                         (___)&lt;br /&gt;
                                           |    \/    |&lt;br /&gt;
                                           |    /\    |&lt;br /&gt;
                                           |   /  \   |&lt;br /&gt;
                                     bond0 |  /    \  | bond0&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 2 (use at your own risk) ====&lt;br /&gt;
If there&#039;s only one switch available (or the switches aren&#039;t stackable), connect the nodes as illustrated below. As you can see, the switch is a single point of failure.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+                     ___&lt;br /&gt;
                                  bond0  \\            // bond0                              ___(   )___&lt;br /&gt;
                                          \\          //                                  __(           )__&lt;br /&gt;
                                           \\        //                                 _(                 )_&lt;br /&gt;
                                         +-------------+                              _(                     )_&lt;br /&gt;
                                         |  switch-01  |-----------------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                         +-------------+                               (_                   _)&lt;br /&gt;
                                           //       \\                                   (__             __) &lt;br /&gt;
                                          //         \\                                     (___     ___)&lt;br /&gt;
                                  bond0  //           \\ bond0                                  (___)&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Logical Layer ====&lt;br /&gt;
The goal is to achieve the following configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|   10.1.110.1X  |   10.1.120.1X  |   10.1.130.1X  | 192.168.140.1X |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|                |                |                |     vmbr0      |&lt;br /&gt;
|     vlan110    |    vlan120     |    vlan130     +----------------+&lt;br /&gt;
|                |                |                |    vlan140     |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
|                   bond0 (bonding.mode=802.3ad)                    |&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
|      eth0      |                                 |      eth1      |&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ideal stoney cloud environment is based on four logically separated VLANs (virtual LANs):&lt;br /&gt;
* &#039;&#039;&#039;admin&#039;&#039;&#039;: Administrative network, used for administration and monitoring purposes.&lt;br /&gt;
* &#039;&#039;&#039;data&#039;&#039;&#039;: Data network, used for GlusterFS traffic.&lt;br /&gt;
* &#039;&#039;&#039;int&#039;&#039;&#039;: Internal network, used for internal traffic such as LDAP, libvirt and more.&lt;br /&gt;
* &#039;&#039;&#039;pub&#039;&#039;&#039;: Public network, used for accessing the VM-Manager web interface, SPICE traffic and internet access.&lt;br /&gt;
&lt;br /&gt;
Each of the above VLANs holds dedicated services and separates them from each other. This documentation assumes that the four VLANs are present and the following IP networks are available:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100%&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN ID&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Network prefix&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Default Gateway address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Broadcast address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Domain name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VIP&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|admin&lt;br /&gt;
|110&lt;br /&gt;
|10.1.110.0/24&lt;br /&gt;
| -- &lt;br /&gt;
|10.1.110.255&lt;br /&gt;
|admin.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|data&lt;br /&gt;
|120&lt;br /&gt;
|10.1.120.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.120.255&lt;br /&gt;
|data.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|int&lt;br /&gt;
|130&lt;br /&gt;
|10.1.130.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.130.255&lt;br /&gt;
|int.example.com&lt;br /&gt;
|10.1.130.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|pub&lt;br /&gt;
|140&lt;br /&gt;
|192.168.140.0/24&lt;br /&gt;
|192.168.140.1&lt;br /&gt;
|192.168.140.255&lt;br /&gt;
|example.com&lt;br /&gt;
|192.168.140.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The IP allocation of the nodes will be assumed as stated in the table below:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100% &lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Node name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Admin address (VLAN 110)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Data address (VLAN 120)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Int address (VLAN 130)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Pub address (VLAN 140)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-01&lt;br /&gt;
|10.1.110.11&lt;br /&gt;
|10.1.120.11&lt;br /&gt;
|10.1.130.11&lt;br /&gt;
|192.168.140.11&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-02&lt;br /&gt;
|10.1.110.12&lt;br /&gt;
|10.1.120.12&lt;br /&gt;
|10.1.130.12&lt;br /&gt;
|192.168.140.12&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-01&lt;br /&gt;
|10.1.110.13&lt;br /&gt;
|10.1.120.13&lt;br /&gt;
|10.1.130.13&lt;br /&gt;
|192.168.140.13&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-02&lt;br /&gt;
|10.1.110.14&lt;br /&gt;
|10.1.120.14&lt;br /&gt;
|10.1.130.14&lt;br /&gt;
|192.168.140.14&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
We&#039;ll also presume that the following Domain Name Server is available:&lt;br /&gt;
* Domain Name Server 1: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
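&lt;br /&gt;
The addressing scheme above is regular: the admin, data and int VLANs use 10.1.x.0/24 (where x is the VLAN ID), the pub VLAN uses 192.168.140.0/24, and the four nodes use the host parts 11 to 14. As a rough sketch (the helper function below is hypothetical and not part of the stoney cloud tools), a node address can be derived from these conventions:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical helper, not part of the stoney cloud installer:
# derive a node address from the VLAN ID and the host part (11..14).
# VLANs 110/120/130 live in 10.1.x.0/24, VLAN 140 in 192.168.140.0/24.
node_ip() {
    vlan=$1
    host=$2
    if [ "$vlan" -eq 140 ]; then
        echo "192.168.140.$host"
    else
        echo "10.1.$vlan.$host"
    fi
}

# All four addresses of tier1-storage-node-01:
for v in 110 120 130 140; do
    node_ip "$v" 11
done
```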
&lt;br /&gt;
== Base Installation ==&lt;br /&gt;
All nodes are based on the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
&lt;br /&gt;
=== BIOS Set Up Checklist ===&lt;br /&gt;
* Enable the &amp;quot;reboot-after-power-loss&amp;quot; option (if your BIOS supports it).&lt;br /&gt;
* Make sure that you have the newest BIOS (and BMC and, if applicable, SCSI firmware) version.&lt;br /&gt;
* Make sure you&#039;ve disabled halt on POST errors (or similar) or enabled keyboard-less operation (if your BIOS supports it).&lt;br /&gt;
&lt;br /&gt;
=== RAID Set Up ===&lt;br /&gt;
Create a RAID1 volume. This RAID-Set is used for the Operating System. Please be aware that the current stoney cloud only supports disks from 147 Gigabytes up to 2 Terabytes for this first RAID-Set.&lt;br /&gt;
&lt;br /&gt;
Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
&lt;br /&gt;
=== Node Installation ===&lt;br /&gt;
The first step of the Semi-Automatic Multi-Node Set Up is the same for all four Nodes. In this documentation we presume that you stick to the naming convention mentioned above. After the Base Installation of the Nodes, the following daemons will be running:&lt;br /&gt;
* &#039;&#039;&#039;crond&#039;&#039;&#039;: Crond, to execute scheduled commands&lt;br /&gt;
* &#039;&#039;&#039;ntpd&#039;&#039;&#039;: Network Time Protocol daemon, keeps the system time synchronized with time servers in the LAN or WAN.&lt;br /&gt;
* &#039;&#039;&#039;sshd&#039;&#039;&#039;: OpenSSH SSH daemon, used for remote access and remote administration.&lt;br /&gt;
* &#039;&#039;&#039;syslogd&#039;&#039;&#039;: System logging daemon, keeps track of messages from the system and the applications.&lt;br /&gt;
* &#039;&#039;&#039;udevd&#039;&#039;&#039;: Linux dynamic device management, manages device events, symlinks and permissions.&lt;br /&gt;
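&lt;br /&gt;
Whether these daemons are actually running can be checked along the following lines (a sketch; &#039;&#039;pgrep -x&#039;&#039; matches the exact process name, which may vary slightly between syslog implementations):&lt;br /&gt;

```shell
#!/bin/sh
# Report the status of each base daemon listed above (sketch).
daemon_status() {
    if pgrep -x "$1" >/dev/null 2>/dev/null; then
        echo "$1: running"
    else
        echo "$1: NOT running"
    fi
}

for d in crond ntpd sshd syslogd udevd; do
    daemon_status "$d"
done
```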
&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as follows (the bold values are examples; they can be set by the administrator and vary according to the local setup):&lt;br /&gt;
## Global Section&lt;br /&gt;
### Confirm that you want to start? &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose a Node-Type: &#039;&#039;&#039;Storage-Node&#039;&#039;&#039;&lt;br /&gt;
### Choose a Block-Device: &#039;&#039;&#039;sda&#039;&#039;&#039;&lt;br /&gt;
### Confirm to erase all (from a previous installation): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Confirm to continue with the given Partition-Scheme: &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose the network interfaces to bond together.&lt;br /&gt;
#### Device #0: &#039;&#039;&#039;eth0&#039;&#039;&#039;&lt;br /&gt;
#### Device #1: &#039;&#039;&#039;eth1&#039;&#039;&#039;&lt;br /&gt;
### Node-Name: &#039;&#039;&#039;tier1-storage-node-01&#039;&#039;&#039;&lt;br /&gt;
## pub-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;140&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;192.168.140.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;192.168.140.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the pub-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## admin-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;110&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;admin.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.110.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the admin-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## data-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;120&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;data.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.120.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.120.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the data-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## int-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;130&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;int.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.130.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.130.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the int-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Enter the default Gateway: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Enter the primary DNS-Server: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Omit configuring a second DNS-Server with &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
## Confirm the listed configuration with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## Enter your very secret root password&lt;br /&gt;
## Confirm to reboot with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
# Make sure that you boot from the first hard disk and not from the installation medium again.&lt;br /&gt;
# Continue with [[Multi-Node Installation#Specialized_Installation|specializing your Node]]&lt;br /&gt;
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the addresses from the tables above.&lt;br /&gt;
# Reboot the server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions, using the addresses from the tables above.&lt;br /&gt;
# Reboot the server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions, using the addresses from the tables above.&lt;br /&gt;
# Reboot the server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
== Skipping Checks ==&lt;br /&gt;
To skip checks, type &#039;&#039;&#039;no&#039;&#039;&#039; when asked:&lt;br /&gt;
  Do you want to start the installation?&lt;br /&gt;
 yes or no?: &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Then manually restart the stoney cloud installer with the desired options. For example:&lt;br /&gt;
 /mnt/cdrom/foss-cloud-installer -c&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
 -c: Skip CPU requirement checks&lt;br /&gt;
 -m: Skip memory requirement checks&lt;br /&gt;
 -s: Skip CPU and memory requirement checks&lt;br /&gt;
&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
Before running the node configuration script, you may want to create a [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on Storage Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the first [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
Before running the node configuration script, you may want to create a [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on Storage Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the second [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you should [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the VM-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Primary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
The stoney cloud uses virtual IP addresses (VIPs) for failover purposes. Therefore you need to configure [http://www.pureftpd.org/project/ucarp ucarp].&lt;br /&gt;
&lt;br /&gt;
Confirm that you want to run the script.&lt;br /&gt;
 Do you really want to proceed with configuration of the primary-master-node?&lt;br /&gt;
 yes or no (default: no): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the public interface. Apache listens on this VIP, since it is bound to the public interface (if you followed this documentation, the VIP for the public interface is 192.168.140.10).&lt;br /&gt;
 Please enter the VIP for the pub-interface (VLAN 140)&lt;br /&gt;
 (default: 192.168.140.10): &#039;&#039;&#039;192.168.140.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the internal interface. The LDAP server listens on this VIP, since it is bound to the internal interface (if you followed this documentation, the VIP for the internal interface is 10.1.130.10).&lt;br /&gt;
 Please enter the VIP for the int-interface (VLAN 130)&lt;br /&gt;
 (default: 10.1.130.10): &#039;&#039;&#039;10.1.130.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the RIP (real IP) of the secondary-master-node on the internal interface. The RIP is needed to keep the LDAP directories synchronized (if you followed this documentation, the RIP of the secondary-master-node on the internal interface is 10.1.130.14).&lt;br /&gt;
 Please enter the IP for the int-interface (VLAN 130) of the&lt;br /&gt;
 secondary-master-node (default: 10.1.130.14): &#039;&#039;&#039;10.1.130.14&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The script now tests the network configuration.&lt;br /&gt;
&lt;br /&gt;
In order to mount the gluster-filesystem, you need to connect via SSH to the primary-storage-node. Enter the IP of the primary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.11). Then enter a valid username which exists on the primary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the primary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.11): &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root: &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to mount the gluster-filesystem, you also need to connect via SSH to the secondary-storage-node. Enter the IP of the secondary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.12). Then enter a valid username which exists on the secondary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the secondary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.12): &#039;&#039;&#039;10.1.110.12&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root:  &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Configure the LDAP directory: &lt;br /&gt;
** Define the password for the LDAP-Superuser (cn=Manager,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Currently, the user for the prov-backup-kvm daemon is the LDAP-Superuser, so enter the same password again&lt;br /&gt;
** Define the password for the LDAP-dhcp user (cn=dhcp,ou=services,ou=administration,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Enter all necessary information for the stoney cloud administrator (User1)&lt;br /&gt;
*** Given name&lt;br /&gt;
*** Surname&lt;br /&gt;
*** Gender&lt;br /&gt;
*** E-mail&lt;br /&gt;
*** Language&lt;br /&gt;
*** Password&lt;br /&gt;
&lt;br /&gt;
* Finally, enter the domain name which will correspond to the public VIP (the default is stoney-cloud.example.org)&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
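&lt;br /&gt;
The crontab entry can be installed non-interactively along these lines (a sketch; you can equally run &#039;&#039;crontab -e&#039;&#039; as root and paste the line by hand):&lt;br /&gt;

```shell
#!/bin/sh
# The daily backup job from the list above:
entry='00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM'

# Append it to the root crontab, preserving any existing entries (sketch).
if command -v crontab >/dev/null 2>/dev/null; then
    tmp=$(mktemp)
    crontab -l 2>/dev/null > "$tmp"
    echo "$entry" >> "$tmp"
    crontab "$tmp"
    rm -f "$tmp"
fi
```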
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you should [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the Secondary-Master-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Secondary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-master-node&lt;br /&gt;
&lt;br /&gt;
* In order to get the configuration from the primary-master-node, we need to access it via SSH:&lt;br /&gt;
** Enter the IP for the primary-master-node on the int interface (if you followed this documentation it is 10.1.130.13)&lt;br /&gt;
** Enter the username (if you followed the default setup it is root)&lt;br /&gt;
** Enter the user&#039;s password.&lt;br /&gt;
&lt;br /&gt;
* In order to mount the gluster-filesystem, you need to connect via ssh to the primary-storage-node, so enter the following information:&lt;br /&gt;
** Enter the IP of the primary-storage-node on the admin interface (if you followed this documentation it is 10.1.110.11)&lt;br /&gt;
** Enter a valid username which exists on the primary-storage-node (if you followed this documentation it is root)&lt;br /&gt;
** Enter the user&#039;s password&lt;br /&gt;
* Repeat the same procedure for the secondary-storage-node (if you followed this documentation the IP is 10.1.110.12)&lt;br /&gt;
&lt;br /&gt;
* Enter the LDAP-Superuser password you defined during the [[Multi-Node Installation#Primary-Master-Node (stoney-cloud-node-01)_2 | primary-master-node]] installation&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
= Node Integration =&lt;br /&gt;
The following figure gives an overview of what the node-integration script does for the different node types:&lt;br /&gt;
&lt;br /&gt;
[[File:node-integration.png|500px|thumbnail|none|What the node-integration script does for the different node types]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can modify/update these steps by editing [[File:node-integration.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
= Old Documentation =&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
=== Primary-Master-Node (vm-node-01) ===&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you should mount it now on the VM-Node.&lt;br /&gt;
&lt;br /&gt;
Log into the Primary-Master-Node and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
==== Manual Steps ====&lt;br /&gt;
In order to be able to migrate a VM from a carrier, a special user called transfer will be created.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
lvcreate -L 60G -n transfer local0&lt;br /&gt;
&lt;br /&gt;
mkfs.xfs -L &amp;quot;OSBD_transfe&amp;quot; /dev/local0/transfer &lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
LABEL=OSBD_transfe  /home/transfer    xfs      noatime,nodev,nosuid,noexec  0 2&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
mount /home/transfer&lt;br /&gt;
&lt;br /&gt;
useradd --comment &amp;quot;User which is used for VM disk file transfer between carriers&amp;quot; \&lt;br /&gt;
        --create-home \&lt;br /&gt;
        --system \&lt;br /&gt;
        --user-group \&lt;br /&gt;
        transfer&lt;br /&gt;
&lt;br /&gt;
passwd transfer&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
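&lt;br /&gt;
Afterwards you can verify that the user exists and that the dedicated volume is mounted (a sketch):&lt;br /&gt;

```shell
#!/bin/sh
# Verify the result of the steps above (sketch).
user_exists() {
    getent passwd "$1" >/dev/null 2>/dev/null
}

if user_exists transfer; then
    echo "transfer user present"
else
    echo "transfer user missing"
fi

# The dedicated volume should be mounted on the home directory:
if mount | grep -q '/home/transfer'; then
    echo "/home/transfer is mounted"
else
    echo "/home/transfer is not mounted"
fi
```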
&lt;br /&gt;
&lt;br /&gt;
Allow password authentication for the transfer user:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$EDITOR /etc/ssh/sshd_config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
Match User transfer&lt;br /&gt;
        PasswordAuthentication yes&lt;br /&gt;
&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To apply the changes above, restart the SSH daemon:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/etc/init.d/sshd restart&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney cloud]][[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:node-integration.png&amp;diff=3775</id>
		<title>File:node-integration.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:node-integration.png&amp;diff=3775"/>
		<updated>2014-06-27T12:53:09Z</updated>

		<summary type="html">&lt;p&gt;Pat: Initial version&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Initial version&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3774</id>
		<title>stoney cloud: Multi-Node Installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3774"/>
		<updated>2014-06-27T12:52:49Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Node Integration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The [http://www.stoney-cloud.org/ stoney cloud] builds upon various standard open source components and can be run on commodity hardware. The final Multi-Node Setup consists of the following components:&lt;br /&gt;
* One [[Primary-Master-Node]] with an OpenLDAP Directory Server for the storage of the stoney cloud user and service related data with the web based management [[VM-Manager]] interface and the Linux kernel based virtualization technology.&lt;br /&gt;
* One [[Secondary-Master-Node]] with the Linux kernel based virtualization technology.&lt;br /&gt;
* Two [[Storage-Node | Storage-Nodes]] configured as  a replicated and distributed data storage service based on [http://www.gluster.org/ GlusterFS].&lt;br /&gt;
The components communicate with each other over a standard Ethernet based IPv4 network.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
The following items and conditions are required to be able to install and configure a stoney cloud environment:&lt;br /&gt;
* Dedicated Hardware (4 Servers) which fulfil the following requirements: &lt;br /&gt;
** 64-bit Intel CPU with VT technology (AMD is not tested at the moment).&lt;br /&gt;
** 8 Gigabyte Memory (more is better).&lt;br /&gt;
** 147 Gigabyte up to 2 Terabyte disks.&lt;br /&gt;
*** For all four Nodes ([[VM-Node | VM-Nodes]] and [[Storage-Node | Storage-Nodes]]) two 147 Gigabyte Disks, configured as RAID1, are enough for the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
*** Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
** Two physical Ethernet Interfaces, which support the same bandwidth (for example 1 Gigabit/s).&lt;br /&gt;
* Two Gigabit layer-2 switches supporting [http://en.wikipedia.org/wiki/IEEE_802.3ad IEEE 802.3ad] (dynamic link aggregation), [http://en.wikipedia.org/wiki/IEEE_802.1Q IEEE 802.1Q] (VLAN tagging) and stacking (optional, but recommended).&lt;br /&gt;
* Experience with Linux environments especially with [http://www.gentoo.org Gentoo Linux].&lt;br /&gt;
* Good experience with IP networking, because the switches need to be configured manually (dynamic link aggregation and VLAN tagging).&lt;br /&gt;
&lt;br /&gt;
=== Limitations ===&lt;br /&gt;
During the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system, the first two physical Ethernet Interfaces are automatically configured as a logical interface (bond0) and the four tagged VLANs are set up. If more than two physical Ethernet Interfaces are to be included into the logical interface (bond0), or the physical Ethernet Interfaces have different bandwidths (for example 1 Gigabit/s and 10 Gigabit/s), the final logical interface (bond0) needs to be configured manually after the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
Only the first two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] are set up automatically. More [[Storage-Node | Storage-Nodes]] need to be integrated manually.&lt;br /&gt;
&lt;br /&gt;
=== Network Overview ===&lt;br /&gt;
As stated before, a minimal multi-node stoney cloud environment consists of two VM-Nodes and two Storage-Nodes.&lt;br /&gt;
It is highly recommended to use IEEE 802.3ad link aggregation (bonding, trunking etc.) over two network cards and attach them as one logical link to the access switch.&lt;br /&gt;
&lt;br /&gt;
How to configure the switches for stacking, link aggregation and VLAN tagging is out of the scope of this document; consult the respective user manual.&lt;br /&gt;
&lt;br /&gt;
There are two scenarios on how to connect the nodes to the network.&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 1 (recommended) ====&lt;br /&gt;
The preferred solution is to use two stackable switches (supporting link aggregation across two switches). Connect the nodes and switches as illustrated below, thus eliminating the single point of failure present in scenario 2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+        &lt;br /&gt;
                                     bond0 | \      / | bond0&lt;br /&gt;
                                           |  \    /  |&lt;br /&gt;
                                           |   \  /   |&lt;br /&gt;
                                           |    \/    |                                          ___&lt;br /&gt;
                                           |    /\    |                                      ___(   )___&lt;br /&gt;
                                           |   /  \   |                                   __(           )__&lt;br /&gt;
                                           |  /    \  |                                 _(                 )_&lt;br /&gt;
                                 +-------------+   +-------------+                    _(                     )_&lt;br /&gt;
                                 |  switch-01  |===|  switch-02  |-------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                 +-------------+   +-------------+                     (_                   _)&lt;br /&gt;
                                           | \      / |                                  (__             __)&lt;br /&gt;
                                           |  \    /  |                                     (___     ___)&lt;br /&gt;
                                           |   \  /   |                                         (___)&lt;br /&gt;
                                           |    \/    |&lt;br /&gt;
                                           |    /\    |&lt;br /&gt;
                                           |   /  \   |&lt;br /&gt;
                                     bond0 |  /    \  | bond0&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 2 (use at your own risk) ====&lt;br /&gt;
If there is only one switch available (or the switches aren&#039;t stackable), connect the nodes as illustrated below. As you can see, the switch is a single point of failure.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+                     ___&lt;br /&gt;
                                  bond0  \\            // bond0                              ___(   )___&lt;br /&gt;
                                          \\          //                                  __(           )__&lt;br /&gt;
                                           \\        //                                 _(                 )_&lt;br /&gt;
                                         +-------------+                              _(                     )_&lt;br /&gt;
                                         |  switch-01  |-----------------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                         +-------------+                               (_                   _)&lt;br /&gt;
                                           //       \\                                   (__             __) &lt;br /&gt;
                                          //         \\                                     (___     ___)&lt;br /&gt;
                                  bond0  //           \\ bond0                                  (___)&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network overview: Logical layer ====&lt;br /&gt;
The goal is to achieve the following configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|   10.1.110.1X  |   10.1.120.1X  |   10.1.130.1X  | 192.168.140.1X |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|                |                |                |     vmbr0      |&lt;br /&gt;
|     vlan110    |    vlan120     |    vlan130     +----------------+&lt;br /&gt;
|                |                |                |    vlan140     |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
|                   bond0 (bonding.mode=802.3ad)                    |&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
|      eth0      |                                 |      eth1      |&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
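&lt;br /&gt;
On [http://www.gentoo.org/ Gentoo Linux], a bonding and VLAN layout like the one above corresponds roughly to the following &amp;lt;code&amp;gt;/etc/conf.d/net&amp;lt;/code&amp;gt; fragment. This is a minimal, illustrative sketch using netifrc syntax with the addresses of the first Storage-Node; the stoney cloud installer generates the actual configuration automatically:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
slaves_bond0=&amp;quot;eth0 eth1&amp;quot;&lt;br /&gt;
mode_bond0=&amp;quot;802.3ad&amp;quot;&lt;br /&gt;
config_bond0=&amp;quot;null&amp;quot;&lt;br /&gt;
vlans_bond0=&amp;quot;110 120 130 140&amp;quot;&lt;br /&gt;
config_bond0_110=&amp;quot;10.1.110.11/24&amp;quot;&lt;br /&gt;
config_bond0_120=&amp;quot;10.1.120.11/24&amp;quot;&lt;br /&gt;
config_bond0_130=&amp;quot;10.1.130.11/24&amp;quot;&lt;br /&gt;
config_bond0_140=&amp;quot;null&amp;quot;&lt;br /&gt;
bridge_vmbr0=&amp;quot;bond0.140&amp;quot;&lt;br /&gt;
config_vmbr0=&amp;quot;192.168.140.11/24&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;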
&lt;br /&gt;
The ideal stoney cloud environment is based on four logically separated VLANs (virtual LANs):&lt;br /&gt;
* &#039;&#039;&#039;admin&#039;&#039;&#039;: Administrative network, used for administration and monitoring purposes.&lt;br /&gt;
* &#039;&#039;&#039;data&#039;&#039;&#039;: Data network, used for GlusterFS traffic.&lt;br /&gt;
* &#039;&#039;&#039;int&#039;&#039;&#039;: Internal network, used for internal traffic such as LDAP, libvirt and more.&lt;br /&gt;
* &#039;&#039;&#039;pub&#039;&#039;&#039;: Public network, used for accessing the VM-Manager webinterface, Spice traffic and internet access.&lt;br /&gt;
&lt;br /&gt;
Each of the above VLANs holds dedicated services and separates them from the others. This documentation assumes that the four VLANs are present and the following IP networks are available:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100%&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN ID&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Network prefix&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Default Gateway address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Broadcast address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Domain name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VIP&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|admin&lt;br /&gt;
|110&lt;br /&gt;
|10.1.110.0/24&lt;br /&gt;
| -- &lt;br /&gt;
|10.1.110.255&lt;br /&gt;
|admin.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|data&lt;br /&gt;
|120&lt;br /&gt;
|10.1.120.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.120.255&lt;br /&gt;
|data.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|int&lt;br /&gt;
|130&lt;br /&gt;
|10.1.130.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.130.255&lt;br /&gt;
|int.example.com&lt;br /&gt;
|10.1.130.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|pub&lt;br /&gt;
|140&lt;br /&gt;
|192.168.140.0/24&lt;br /&gt;
|192.168.140.1&lt;br /&gt;
|192.168.140.255&lt;br /&gt;
|example.com&lt;br /&gt;
|192.168.140.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The IP allocation of the nodes is assumed as stated in the table below:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100% &lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Node name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Admin address (VLAN 110)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Data address (VLAN 120)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Int address (VLAN 130)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Pub address (VLAN 140)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-01&lt;br /&gt;
|10.1.110.11&lt;br /&gt;
|10.1.120.11&lt;br /&gt;
|10.1.130.11&lt;br /&gt;
|192.168.140.11&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-02&lt;br /&gt;
|10.1.110.12&lt;br /&gt;
|10.1.120.12&lt;br /&gt;
|10.1.130.12&lt;br /&gt;
|192.168.140.12&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-01&lt;br /&gt;
|10.1.110.13&lt;br /&gt;
|10.1.120.13&lt;br /&gt;
|10.1.130.13&lt;br /&gt;
|192.168.140.13&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-02&lt;br /&gt;
|10.1.110.14&lt;br /&gt;
|10.1.120.14&lt;br /&gt;
|10.1.130.14&lt;br /&gt;
|192.168.140.14&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
We&#039;ll also presume that we have the following Domain Name Server:&lt;br /&gt;
* Domain Name Server 1: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Base Installation ==&lt;br /&gt;
All nodes are based on the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
&lt;br /&gt;
=== BIOS Set Up Checklist ===&lt;br /&gt;
* Enable the &amp;quot;reboot-after-power-loss&amp;quot; option (if your BIOS supports it).&lt;br /&gt;
* Make sure that you have the newest BIOS (and, where applicable, BMC and SCSI firmware) version.&lt;br /&gt;
* Make sure you&#039;ve disabled halt on POST errors (or similar) or enabled keyboard-less operation (if your BIOS supports it).&lt;br /&gt;
&lt;br /&gt;
=== RAID Set Up ===&lt;br /&gt;
Create a RAID1 volume. This RAID-Set is used for the Operating System. Please be aware that the current stoney cloud only supports disks from 147 Gigabyte up to 2 Terabyte for this first RAID-Set.&lt;br /&gt;
&lt;br /&gt;
Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
&lt;br /&gt;
=== Node Installation ===&lt;br /&gt;
The first step of the Semi-Automatic Multi-Node Set Up is the same for all four Nodes. In this documentation we presume that you stick to the naming convention mentioned above. After the Base Installation of the Nodes, the following daemons will be running:&lt;br /&gt;
* &#039;&#039;&#039;crond&#039;&#039;&#039;: Crond, to execute scheduled commands&lt;br /&gt;
* &#039;&#039;&#039;ntpd&#039;&#039;&#039;: Network Time Protocol daemon, keeps the system time synchronized with time servers in the LAN or WAN.&lt;br /&gt;
* &#039;&#039;&#039;sshd&#039;&#039;&#039;: OpenSSH SSH daemon, used for remote access and remote administration.&lt;br /&gt;
* &#039;&#039;&#039;syslogd&#039;&#039;&#039;: System logging daemon, keeps track of messages from the system and the applications.&lt;br /&gt;
* &#039;&#039;&#039;udevd&#039;&#039;&#039;: Linux dynamic device management daemon, which manages device events, symlinks and permissions.&lt;br /&gt;
&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as follows (the bold values are examples; they can be set by the administrator and vary according to the local setup):&lt;br /&gt;
## Global Section&lt;br /&gt;
### Confirm that you want to start? &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose a Node-Type: &#039;&#039;&#039;Storage-Node&#039;&#039;&#039;&lt;br /&gt;
### Choose a Block-Device: &#039;&#039;&#039;sda&#039;&#039;&#039;&lt;br /&gt;
### Confirm to erase all (from a previous installation): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Confirm to continue with the given the Partition-Scheme: &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose the network interfaces to bond together.&lt;br /&gt;
#### Device #0: &#039;&#039;&#039;eth0&#039;&#039;&#039;&lt;br /&gt;
#### Device #1: &#039;&#039;&#039;eth1&#039;&#039;&#039;&lt;br /&gt;
### Node-Name: &#039;&#039;&#039;tier1-storage-node-01&#039;&#039;&#039;&lt;br /&gt;
## pub-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;140&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;192.168.140.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;192.168.140.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the pub-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## admin-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;110&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;admin.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.110.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the admin-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## data-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;120&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;data.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.120.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.120.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the data-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## int-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;130&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;int.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.130.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.130.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the int-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Enter the default Gateway: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Enter the primary DNS-Server: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Omit configuring a second DNS-Server with &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
## Confirm the listed configuration with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## Enter your very secret root password&lt;br /&gt;
## Confirm to reboot with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
# Make sure that you boot from the first hard disk and not from the installation medium again.&lt;br /&gt;
# Continue with [[Multi-Node Installation#Specialized_Installation|specializing your Node]]&lt;br /&gt;
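&lt;br /&gt;
After the reboot, you can verify the bonding and VLAN setup before continuing. These commands are illustrative; adapt the gateway address to your network:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Check that both slaves joined the 802.3ad bond&lt;br /&gt;
cat /proc/net/bonding/bond0&lt;br /&gt;
&lt;br /&gt;
# List the configured VLAN interfaces and their addresses&lt;br /&gt;
ip addr show&lt;br /&gt;
&lt;br /&gt;
# Test connectivity to the default gateway&lt;br /&gt;
ping -c 1 192.168.140.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;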
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as for the first Storage-Node, using the values for tier1-storage-node-02 from the tables above.&lt;br /&gt;
# Reboot the Server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as for the first Storage-Node, using the values for vm-node-01 from the tables above.&lt;br /&gt;
# Reboot the Server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as for the first Storage-Node, using the values for vm-node-02 from the tables above.&lt;br /&gt;
# Reboot the Server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
== Skipping Checks ==&lt;br /&gt;
To skip checks, type &#039;&#039;&#039;no&#039;&#039;&#039; when asked:&lt;br /&gt;
  Do you want to start the installation?&lt;br /&gt;
 yes or no?: &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Then manually restart the stoney cloud installer with the desired options. For example:&lt;br /&gt;
 /mnt/cdrom/foss-cloud-installer -c&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
 -c: Skip CPU requirement checks&lt;br /&gt;
 -m: Skip memory requirement checks&lt;br /&gt;
 -s: Skip CPU and memory requirement checks&lt;br /&gt;
&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the first [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the second [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the VM-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Primary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
The stoney cloud uses virtual IP addresses (VIPs) for failover purposes. Therefore you need to configure [http://www.pureftpd.org/project/ucarp ucarp].&lt;br /&gt;
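&lt;br /&gt;
For reference, a ucarp instance for the public VIP is started along these lines. This is illustrative only; the node-configuration script sets this up for you, and the up/down script paths are placeholders:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Advertise the public VIP on the pub-VLAN interface of vm-node-01&lt;br /&gt;
ucarp --interface=bond0.140 --srcip=192.168.140.13 --vhid=140 \&lt;br /&gt;
      --pass=secret --addr=192.168.140.10 \&lt;br /&gt;
      --upscript=/path/to/vip-up.sh --downscript=/path/to/vip-down.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;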
&lt;br /&gt;
Confirm that you want to run the script.&lt;br /&gt;
 Do you really want to proceed with configuration of the primary-master-node?&lt;br /&gt;
 yes or no (default: no): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the public interface. Apache will listen on this VIP, since it serves the public interface (if you followed this documentation, the VIP for the public interface is 192.168.140.10).&lt;br /&gt;
 Please enter the VIP for the pub-interface (VLAN 140)&lt;br /&gt;
 (default: 192.168.140.10): &#039;&#039;&#039;192.168.140.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the internal interface. The LDAP server will listen on this VIP, since it serves the internal interface (if you followed this documentation, the VIP for the internal interface is 10.1.130.10).&lt;br /&gt;
 Please enter the VIP for the int-interface (VLAN 130)&lt;br /&gt;
 (default: 10.1.130.10): &#039;&#039;&#039;10.1.130.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the RIP (real IP) of the secondary-master-node on the internal interface. The RIP is needed to keep the LDAP directories synchronized (if you followed this documentation, the RIP of the secondary-master-node on the internal interface is 10.1.130.14).&lt;br /&gt;
 Please enter the IP for the int-interface (VLAN 130) of the&lt;br /&gt;
 secondary-master-node (default: 10.1.130.14): &#039;&#039;&#039;10.1.130.14&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The script now tests the network configuration.&lt;br /&gt;
&lt;br /&gt;
In order to mount the gluster-filesystem, the script needs to connect to the primary-storage-node via SSH. Enter the IP of the primary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.11), a valid username which exists on the primary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the primary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.11): &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root: &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to mount the gluster-filesystem, the script needs to connect to the secondary-storage-node via SSH. Enter the IP of the secondary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.12), a valid username which exists on the secondary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the secondary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.12): &#039;&#039;&#039;10.1.110.12&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root:  &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Configure the LDAP directory: &lt;br /&gt;
** Define the password for the LDAP-Superuser (cn=Manager,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Currently the user for the prov-backup-kvm daemon is the LDAP-Superuser, so enter the same password again&lt;br /&gt;
** Define the password for the LDAP-dhcp user (cn=dhcp,ou=services,ou=administration,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Enter all necessary information for the stoney cloud administrator (User1)&lt;br /&gt;
*** Given name&lt;br /&gt;
*** Surname&lt;br /&gt;
*** Gender&lt;br /&gt;
*** E-mail&lt;br /&gt;
*** Language&lt;br /&gt;
*** Password&lt;br /&gt;
&lt;br /&gt;
* Finally enter the domain name which will correspond to the public VIP (default is stoney-cloud.example.org)&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the Secondary-Master-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Secondary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-master-node&lt;br /&gt;
&lt;br /&gt;
* In order to get the configuration from the primary-master-node, we need to access it via ssh&lt;br /&gt;
** Enter the IP for the primary-master-node on the int-interface (VLAN 130) (if you followed this documentation, it is 10.1.130.13)&lt;br /&gt;
** Enter the username (if you followed the default setup, it is root)&lt;br /&gt;
** Enter the user&#039;s password.&lt;br /&gt;
&lt;br /&gt;
* In order to mount the gluster-filesystem, you need to connect via ssh to the primary-storage-node, so enter the following information:&lt;br /&gt;
** Enter the IP for the primary-storage-node on the admin interface (if you followed this documentation the IP for the primary-storage-node on the admin interface is 10.1.110.11)&lt;br /&gt;
** Enter a valid username which exists on the primary-storage-node (if you followed this documentation, it is root)&lt;br /&gt;
** Enter the user&#039;s password&lt;br /&gt;
* Repeat the same procedure for the secondary-storage-node (if you followed this documentation the IP is 10.1.110.12)&lt;br /&gt;
&lt;br /&gt;
* Enter the LDAP-Superuser password you defined during the [[Multi-Node Installation#Primary-Master-Node (stoney-cloud-node-01)_2 | primary-master-node]] installation&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
= Node Integration =&lt;br /&gt;
The following figure gives an overview of what the node-integration script does for the different node types.&lt;br /&gt;
&lt;br /&gt;
[[File:node-integration.png|500px|thumbnail|none|What the node-integration script does for the different node types]]&lt;br /&gt;
&lt;br /&gt;
= Old Documentation =&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
=== Primary-Master-Node (vm-node-01) ===&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to mount it now on the VM-Node.&lt;br /&gt;
&lt;br /&gt;
Log into the Primary-Master-Node and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
==== Manual Steps ====&lt;br /&gt;
In order to be able to migrate a VM from a carrier, a special user called transfer will be created.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
lvcreate -L 60G -n transfer local0&lt;br /&gt;
&lt;br /&gt;
mkfs.xfs -L &amp;quot;OSBD_transfe&amp;quot; /dev/local0/transfer # XFS labels are limited to 12 characters&lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
LABEL=OSBD_transfe  /home/transfer    xfs      noatime,nodev,nosuid,noexec  0 2&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# The mount point has to exist before mounting&lt;br /&gt;
mkdir -p /home/transfer&lt;br /&gt;
mount /home/transfer&lt;br /&gt;
&lt;br /&gt;
useradd --comment &amp;quot;User which is used for VM disk file transfer between carriers&amp;quot; \&lt;br /&gt;
        --create-home \&lt;br /&gt;
        --system \&lt;br /&gt;
        --user-group \&lt;br /&gt;
        transfer&lt;br /&gt;
&lt;br /&gt;
passwd transfer&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Allow password authentication for the transfer user:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$EDITOR /etc/ssh/sshd_config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
Match User transfer&lt;br /&gt;
        PasswordAuthentication yes&lt;br /&gt;
&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
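Before restarting, you can verify the configuration syntax with sshd&#039;s test mode:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/usr/sbin/sshd -t&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;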
To apply the changes above, restart the SSH daemon:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/etc/init.d/sshd restart&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney cloud]][[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3773</id>
		<title>stoney cloud: Multi-Node Installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3773"/>
		<updated>2014-06-27T12:51:33Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Node Integration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The [http://www.stoney-cloud.org/ stoney cloud] builds upon various standard open source components and can be run on commodity hardware. The final Multi-Node Setup consists of the following components:&lt;br /&gt;
* One [[Primary-Master-Node]] with an OpenLDAP Directory Server for the storage of the stoney cloud user and service related data with the web based management [[VM-Manager]] interface and the Linux kernel based virtualization technology.&lt;br /&gt;
* One [[Secondary-Master-Node]] with the Linux kernel based virtualization technology.&lt;br /&gt;
* Two [[Storage-Node | Storage-Nodes]] configured as  a replicated and distributed data storage service based on [http://www.gluster.org/ GlusterFS].&lt;br /&gt;
The components communicate with each other over a standard Ethernet based IPv4 network.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
The following items and conditions are required to be able to install and configure a stoney cloud environment:&lt;br /&gt;
* Dedicated Hardware (4 Servers) which fulfil the following requirements: &lt;br /&gt;
** 64-Bit Intel CPU with VT technology (AMD is not tested at the moment).&lt;br /&gt;
** 8 Gigabyte Memory (more is better).&lt;br /&gt;
** 147 Gigabyte up to 2 Terabyte disks.&lt;br /&gt;
*** For all four Nodes ([[VM-Node | VM-Nodes]] and [[Storage-Node | Storage-Nodes]]) two 147 Gigabyte Disks, configured as RAID1, are enough for the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
*** Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
** Two physical Ethernet Interfaces, which support the same bandwidth (for example 1 Gigabit/s).&lt;br /&gt;
* Two Gigabit layer-2 switches supporting [http://en.wikipedia.org/wiki/IEEE_802.3ad IEEE 802.3ad] (dynamic link aggregation), [http://en.wikipedia.org/wiki/IEEE_802.1Q IEEE 802.1Q] (VLAN tagging) and stacking (optional, but recommended).&lt;br /&gt;
* Experience with Linux environments especially with [http://www.gentoo.org Gentoo Linux].&lt;br /&gt;
* Good experience with IP networking, because the Switches need to be configured manually (dynamic link aggregation and VLAN tagging).&lt;br /&gt;
&lt;br /&gt;
=== Limitations ===&lt;br /&gt;
During the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system, the first two physical Ethernet Interfaces are automatically configured as a logical interface (bond0) and the four tagged VLANs are set up. If more than two physical Ethernet Interfaces are to be included into the logical interface (bond0) or the physical Ethernet Interfaces have different bandwidths (for example 1 Gigabit/s and 10 Gigabit/s), the final logical interface (bond0) needs to be configured manually after the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
Only the first two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] are set up automatically. More [[Storage-Node | Storage-Nodes]] need to be integrated manually.&lt;br /&gt;
&lt;br /&gt;
=== Network Overview ===&lt;br /&gt;
As stated before, a minimal multi node stoney cloud environment consists of two VM-Nodes and two Storage-Nodes.&lt;br /&gt;
It is highly recommended to use IEEE 802.3ad link aggregation (bonding, trunking etc.) over two network cards and attach them as one logical link to the access switch.&lt;br /&gt;
&lt;br /&gt;
How to configure the switches for stacking, link aggregation and VLAN tagging is out of the scope of this document; consult the respective user manual.&lt;br /&gt;
&lt;br /&gt;
There are two scenarios on how to connect the nodes to the network.&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 1 (recommended) ====&lt;br /&gt;
The preferred solution is to use two switches which are stackable (supporting link aggregation across two switches). Connect the nodes and switches as illustrated below, thus eliminating the single point of failure present in scenario 2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+        &lt;br /&gt;
                                     bond0 | \      / | bond0&lt;br /&gt;
                                           |  \    /  |&lt;br /&gt;
                                           |   \  /   |&lt;br /&gt;
                                           |    \/    |                                          ___&lt;br /&gt;
                                           |    /\    |                                      ___(   )___&lt;br /&gt;
                                           |   /  \   |                                   __(           )__&lt;br /&gt;
                                           |  /    \  |                                 _(                 )_&lt;br /&gt;
                                 +-------------+   +-------------+                    _(                     )_&lt;br /&gt;
                                 |  switch-01  |===|  switch-02  |-------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                 +-------------+   +-------------+                     (_                   _)&lt;br /&gt;
                                           | \      / |                                  (__             __)&lt;br /&gt;
                                           |  \    /  |                                     (___     ___)&lt;br /&gt;
                                           |   \  /   |                                         (___)&lt;br /&gt;
                                           |    \/    |&lt;br /&gt;
                                           |    /\    |&lt;br /&gt;
                                           |   /  \   |&lt;br /&gt;
                                     bond0 |  /    \  | bond0&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 2 (use at your own risk) ====&lt;br /&gt;
If there&#039;s only one switch available (or the switches aren&#039;t stackable), connect the nodes as illustrated below. As you can see, the switch is a single point of failure.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+                     ___&lt;br /&gt;
                                  bond0  \\            // bond0                              ___(   )___&lt;br /&gt;
                                          \\          //                                  __(           )__&lt;br /&gt;
                                           \\        //                                 _(                 )_&lt;br /&gt;
                                         +-------------+                              _(                     )_&lt;br /&gt;
                                         |  switch-01  |-----------------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                         +-------------+                               (_                   _)&lt;br /&gt;
                                           //       \\                                   (__             __) &lt;br /&gt;
                                          //         \\                                     (___     ___)&lt;br /&gt;
                                  bond0  //           \\ bond0                                  (___)&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Logical Layer ====&lt;br /&gt;
The goal is to achieve the following configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|   10.1.110.1X  |   10.1.120.1X  |   10.1.130.1X  | 192.168.140.1X |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|                |                |                |     vmbr0      |&lt;br /&gt;
|     vlan110    |    vlan120     |    vlan130     +----------------+&lt;br /&gt;
|                |                |                |    vlan140     |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
|                   bond0 (bonding.mode=802.3ad)                    |&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
|      eth0      |                                 |      eth1      |&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
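&lt;br /&gt;
The installer sets this configuration up automatically. For reference, a minimal sketch of what the resulting bonding and VLAN configuration might look like in Gentoo&#039;s netifrc (&amp;lt;code&amp;gt;/etc/conf.d/net&amp;lt;/code&amp;gt;) is shown below; the variable names and addresses are assumptions based on the tables in this document, not the literal output of the installer:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Aggregate eth0 and eth1 into bond0 using IEEE 802.3ad (LACP)&lt;br /&gt;
slaves_bond0=&amp;quot;eth0 eth1&amp;quot;&lt;br /&gt;
mode_bond0=&amp;quot;802.3ad&amp;quot;&lt;br /&gt;
config_bond0=&amp;quot;null&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Tagged VLANs on top of bond0 (example: tier1-storage-node-01)&lt;br /&gt;
vlans_bond0=&amp;quot;110 120 130 140&amp;quot;&lt;br /&gt;
config_bond0_110=&amp;quot;10.1.110.11/24&amp;quot;&lt;br /&gt;
config_bond0_120=&amp;quot;10.1.120.11/24&amp;quot;&lt;br /&gt;
config_bond0_130=&amp;quot;10.1.130.11/24&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# vlan140 carries the public network; on the VM-Nodes it is bridged (vmbr0)&lt;br /&gt;
bridge_vmbr0=&amp;quot;bond0.140&amp;quot;&lt;br /&gt;
config_vmbr0=&amp;quot;192.168.140.11/24&amp;quot;&lt;br /&gt;
routes_vmbr0=&amp;quot;default via 192.168.140.1&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;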
&lt;br /&gt;
The ideal stoney cloud environment is based on four logically separated VLANs (virtual LANs):&lt;br /&gt;
* &#039;&#039;&#039;admin&#039;&#039;&#039;: Administrative network, used for administration and monitoring purposes.&lt;br /&gt;
* &#039;&#039;&#039;data&#039;&#039;&#039;: Data network, used for GlusterFS traffic.&lt;br /&gt;
* &#039;&#039;&#039;int&#039;&#039;&#039;: Internal network, used for internal traffic such as LDAP, libvirt and more.&lt;br /&gt;
* &#039;&#039;&#039;pub&#039;&#039;&#039;: Public network, used for accessing the VM-Manager webinterface, Spice traffic and internet access.&lt;br /&gt;
&lt;br /&gt;
Each of the above VLANs holds dedicated services and separates them from the others. This documentation assumes that the four VLANs are present and the following IP networks are available:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100%&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN ID&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Network prefix&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Default Gateway address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Broadcast address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Domain name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VIP&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|admin&lt;br /&gt;
|110&lt;br /&gt;
|10.1.110.0/24&lt;br /&gt;
| -- &lt;br /&gt;
|10.1.110.255&lt;br /&gt;
|admin.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|data&lt;br /&gt;
|120&lt;br /&gt;
|10.1.120.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.120.255&lt;br /&gt;
|data.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|int&lt;br /&gt;
|130&lt;br /&gt;
|10.1.130.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.130.255&lt;br /&gt;
|int.example.com&lt;br /&gt;
|10.1.130.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|pub&lt;br /&gt;
|140&lt;br /&gt;
|192.168.140.0/24&lt;br /&gt;
|192.168.140.1&lt;br /&gt;
|192.168.140.255&lt;br /&gt;
|example.com&lt;br /&gt;
|192.168.140.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The IP allocation of the nodes is assumed to be as stated in the table below:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100% &lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Node name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Admin address (VLAN 110)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Data address (VLAN 120)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Int address (VLAN 130)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Pub address (VLAN 140)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-01&lt;br /&gt;
|10.1.110.11&lt;br /&gt;
|10.1.120.11&lt;br /&gt;
|10.1.130.11&lt;br /&gt;
|192.168.140.11&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-02&lt;br /&gt;
|10.1.110.12&lt;br /&gt;
|10.1.120.12&lt;br /&gt;
|10.1.130.12&lt;br /&gt;
|192.168.140.12&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-01&lt;br /&gt;
|10.1.110.13&lt;br /&gt;
|10.1.120.13&lt;br /&gt;
|10.1.130.13&lt;br /&gt;
|192.168.140.13&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-02&lt;br /&gt;
|10.1.110.14&lt;br /&gt;
|10.1.120.14&lt;br /&gt;
|10.1.130.14&lt;br /&gt;
|192.168.140.14&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
We&#039;ll also presume that we have the following Domain Name Server:&lt;br /&gt;
* Domain Name Server 1: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
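&lt;br /&gt;
With this single name server, the resolver configuration on each node boils down to the following &amp;lt;code&amp;gt;/etc/resolv.conf&amp;lt;/code&amp;gt; fragment (a sketch; the installer writes the equivalent for you based on your answers):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# /etc/resolv.conf (sketch; values taken from this documentation&#039;s example setup)&lt;br /&gt;
domain example.com&lt;br /&gt;
nameserver 192.168.140.1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;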
&lt;br /&gt;
== Base Installation ==&lt;br /&gt;
All nodes are based on the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
&lt;br /&gt;
=== BIOS Set Up Checklist ===&lt;br /&gt;
* Enable the &amp;quot;reboot-after-power-loss&amp;quot; option (if your BIOS supports it).&lt;br /&gt;
* Make sure that you have the newest BIOS (BMC and perhaps SCSI firmware) version.&lt;br /&gt;
* Make sure that you have disabled halt-on-POST-errors (or similar) or enabled keyboard-less operation (if your BIOS supports it).&lt;br /&gt;
&lt;br /&gt;
=== RAID Set Up ===&lt;br /&gt;
Create a RAID1 volume. This RAID-Set is used for the Operating System. Please be aware that the current stoney cloud only supports disks from 147 Gigabytes up to 2 Terabytes for this first RAID-Set.&lt;br /&gt;
&lt;br /&gt;
Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
&lt;br /&gt;
=== Node Installation ===&lt;br /&gt;
The first step of the Semi-Automatic Multi-Node Set Up is the same for all four Nodes. In this documentation we presume that you stick to the naming convention mentioned above. After the Base Installation of the Nodes, the following daemons will be running:&lt;br /&gt;
* &#039;&#039;&#039;crond&#039;&#039;&#039;: Cron daemon, used to execute scheduled commands.&lt;br /&gt;
* &#039;&#039;&#039;ntpd&#039;&#039;&#039;: Network Time Protocol daemon, keeps the system time synchronized with time servers in the LAN or WAN.&lt;br /&gt;
* &#039;&#039;&#039;sshd&#039;&#039;&#039;: OpenSSH SSH daemon, used for remote access and remote administration.&lt;br /&gt;
* &#039;&#039;&#039;syslogd&#039;&#039;&#039;: System logging daemon, keeps track of messages from the system and its applications.&lt;br /&gt;
* &#039;&#039;&#039;udevd&#039;&#039;&#039;: Linux dynamic device management daemon, manages device events, symlinks and permissions.&lt;br /&gt;
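&lt;br /&gt;
Since the nodes run Gentoo with OpenRC, you can verify after the installation that these daemons are up (an optional sanity check, not part of the official procedure):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# List all services in the default runlevel and their state&lt;br /&gt;
rc-status default&lt;br /&gt;
&lt;br /&gt;
# Check a single daemon, for example the SSH daemon&lt;br /&gt;
/etc/init.d/sshd status&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;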
&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as follows (the bold values are examples; they can be set by the administrator and vary according to the local setup):&lt;br /&gt;
## Global Section&lt;br /&gt;
### Confirm that you want to start? &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose a Node-Type: &#039;&#039;&#039;Storage-Node&#039;&#039;&#039;&lt;br /&gt;
### Choose a Block-Device: &#039;&#039;&#039;sda&#039;&#039;&#039;&lt;br /&gt;
### Confirm to erase all (from a previous installation): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Confirm to continue with the given Partition-Scheme: &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose the network interfaces to bond together.&lt;br /&gt;
#### Device #0: &#039;&#039;&#039;eth0&#039;&#039;&#039;&lt;br /&gt;
#### Device #1: &#039;&#039;&#039;eth1&#039;&#039;&#039;&lt;br /&gt;
### Node-Name: &#039;&#039;&#039;tier1-storage-node-01&#039;&#039;&#039;&lt;br /&gt;
## pub-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;140&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;192.168.140.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;192.168.140.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the pub-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## admin-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;110&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;admin.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.110.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the admin-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## data-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;120&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;data.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.120.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.120.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the data-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## int-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;130&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;int.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.130.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.130.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the int-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Enter the default Gateway: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Enter the primary DNS-Server: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Omit configuring a second DNS-Server with &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
## Confirm the listed configuration with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## Enter your very secret root password&lt;br /&gt;
## Confirm to reboot with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
# Make sure that you boot from the first harddisk and not from the installation medium again.&lt;br /&gt;
# Continue with [[Multi-Node Installation#Specialized_Installation|specializing your Node]]&lt;br /&gt;
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the values for tier1-storage-node-02 from the tables above.&lt;br /&gt;
# Reboot the Server and make sure that you boot from the first harddisk.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the values for vm-node-01 from the tables above.&lt;br /&gt;
# Reboot the Server and make sure that you boot from the first harddisk.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the values for vm-node-02 from the tables above.&lt;br /&gt;
# Reboot the Server and make sure that you boot from the first harddisk.&lt;br /&gt;
&lt;br /&gt;
== Skipping Checks ==&lt;br /&gt;
To skip checks, type &#039;&#039;&#039;no&#039;&#039;&#039; when asked:&lt;br /&gt;
 Do you want to start the installation?&lt;br /&gt;
 yes or no?: &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Then manually restart the stoney cloud installer with the desired options. For example:&lt;br /&gt;
 /mnt/cdrom/foss-cloud-installer -c&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
 -c: Skip CPU requirement checks&lt;br /&gt;
 -m: Skip memory requirement checks&lt;br /&gt;
 -s: Skip CPU and memory requirement checks&lt;br /&gt;
&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the first [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the second [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the VM-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Primary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
The stoney cloud uses virtual IP addresses (VIPs) for failover purposes. Therefore you need to configure [http://www.pureftpd.org/project/ucarp ucarp].&lt;br /&gt;
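&lt;br /&gt;
The node-configuration script sets ucarp up for you. Conceptually, each VIP is announced with an invocation roughly like the following; the interface name, VHID, password and scripts are illustrative placeholders, not the values the script actually uses:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Announce the public VIP 192.168.140.10 from vm-node-01 (placeholder values)&lt;br /&gt;
ucarp -i vlan140 -s 192.168.140.13 -v 140 -p secret \&lt;br /&gt;
      -a 192.168.140.10 \&lt;br /&gt;
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The node holding the master role configures the VIP via the up-script; if it fails, the other node takes over and runs its own up-script.&lt;br /&gt;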
&lt;br /&gt;
Confirm that you want to run the script.&lt;br /&gt;
 Do you really want to proceed with configuration of the primary-master-node?&lt;br /&gt;
 yes or no (default: no): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the public interface. Apache will listen on this VIP, as it serves the public interface (if you followed this documentation, the VIP for the public interface is 192.168.140.10).&lt;br /&gt;
 Please enter the VIP for the pub-interface (VLAN 140)&lt;br /&gt;
 (default: 192.168.140.10): &#039;&#039;&#039;192.168.140.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the internal interface. The LDAP server will listen on this VIP, as it serves the internal interface (if you followed this documentation, the VIP for the internal interface is 10.1.130.10).&lt;br /&gt;
 Please enter the VIP for the int-interface (VLAN 130)&lt;br /&gt;
 (default: 10.1.130.10): &#039;&#039;&#039;10.1.130.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the RIP (real IP) of the secondary-master-node on the internal interface. The RIP is needed to keep the LDAP directories synchronized (if you followed this documentation, the RIP of the secondary-master-node on the internal interface is 10.1.130.14).&lt;br /&gt;
 Please enter the IP for the int-interface (VLAN 130) of the&lt;br /&gt;
 secondary-master-node (default: 10.1.130.14): &#039;&#039;&#039;10.1.130.14&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The script now tests the network configuration before proceeding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In order to mount the GlusterFS filesystem, the script needs to connect to the primary-storage-node via SSH. Enter the IP of the primary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.11), a valid username which exists on the primary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the primary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.11): &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root: &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to mount the GlusterFS filesystem, the script also needs to connect to the secondary-storage-node via SSH. Enter the IP of the secondary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.12), a valid username which exists on the secondary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the secondary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.12): &#039;&#039;&#039;10.1.110.12&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root:  &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
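&lt;br /&gt;
Once the script has mounted the GlusterFS volume, you can sanity-check the result; these commands are optional and only illustrate the kind of check you might run:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Show all mounted GlusterFS filesystems on this node&lt;br /&gt;
mount -t fuse.glusterfs&lt;br /&gt;
&lt;br /&gt;
# On a Storage-Node: check that both peers see each other&lt;br /&gt;
gluster peer status&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;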
&lt;br /&gt;
&lt;br /&gt;
* Configure the LDAP directory: &lt;br /&gt;
** Define the password for the LDAP-Superuser (cn=Manager,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Currently, the user for the prov-backup-kvm daemon is the LDAP-Superuser, so enter the same password again&lt;br /&gt;
** Define the password for the LDAP-dhcp user (cn=dhcp,ou=services,ou=administration,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Enter all necessary information for the stoney cloud administrator (User1)&lt;br /&gt;
*** Given name&lt;br /&gt;
*** Surname&lt;br /&gt;
*** Gender&lt;br /&gt;
*** E-mail&lt;br /&gt;
*** Language&lt;br /&gt;
*** Password&lt;br /&gt;
&lt;br /&gt;
* Finally enter the domain name which will correspond to the public VIP (default is stoney-cloud.example.org)&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
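&lt;br /&gt;
One way to install this cronjob non-interactively is sketched below; you can equally add the line with &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Append the daily backup wrapper to root&#039;s crontab&lt;br /&gt;
( crontab -l 2&amp;gt;/dev/null; echo &amp;quot;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;quot; ) | crontab -&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;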
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the Secondary-Master-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Secondary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-master-node&lt;br /&gt;
&lt;br /&gt;
* In order to get the configuration from the primary-master-node, the script needs to access it via SSH&lt;br /&gt;
** Enter the IP of the primary-master-node on the int-interface (if you followed this documentation, it is 10.1.130.13)&lt;br /&gt;
** Enter the username (if you followed the default setup, it is root)&lt;br /&gt;
** Enter the user&#039;s password.&lt;br /&gt;
&lt;br /&gt;
* In order to mount the GlusterFS filesystem, you need to connect to the primary-storage-node via SSH, so enter the following information:&lt;br /&gt;
** Enter the IP of the primary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.11)&lt;br /&gt;
** Enter a valid username which exists on the primary-storage-node (if you followed this documentation, it is root)&lt;br /&gt;
** Enter the user&#039;s password&lt;br /&gt;
* Repeat the same procedure for the secondary-storage-node (if you followed this documentation, the IP is 10.1.110.12)&lt;br /&gt;
&lt;br /&gt;
* Enter the LDAP-Superuser password you defined during the [[Multi-Node Installation#Primary-Master-Node (stoney-cloud-node-01)_2 | primary-master-node]] installation&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
= Node Integration =&lt;br /&gt;
The following figure gives an overview of what the node-integration script does for the different node types:&lt;br /&gt;
&lt;br /&gt;
[[File:node-integration.png | thumb | 500 px]]&lt;br /&gt;
&lt;br /&gt;
= Old Documentation =&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
=== Primary-Master-Node (vm-node-01) ===&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to mount it now on the VM-Node.&lt;br /&gt;
&lt;br /&gt;
Log into the Primary-Master-Node and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
==== Manual Steps ====&lt;br /&gt;
In order to be able to migrate a VM from a carrier, a special user called transfer will be created.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
lvcreate -L 60G -n transfer local0&lt;br /&gt;
&lt;br /&gt;
mkfs.xfs -L &amp;quot;OSBD_transfe&amp;quot; /dev/local0/transfer &lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
LABEL=OSBD_transfe  /home/transfer    xfs      noatime,nodev,nosuid,noexec  0 2&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
mount /home/transfer&lt;br /&gt;
&lt;br /&gt;
useradd --comment &amp;quot;User which is used for VM disk file transfer between carriers&amp;quot; \&lt;br /&gt;
        --create-home \&lt;br /&gt;
        --system \&lt;br /&gt;
        --user-group \&lt;br /&gt;
        transfer&lt;br /&gt;
&lt;br /&gt;
passwd transfer&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
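&lt;br /&gt;
To verify the result of the commands above (optional sanity checks):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# The dedicated volume should be mounted on /home/transfer&lt;br /&gt;
mount | grep /home/transfer&lt;br /&gt;
&lt;br /&gt;
# The transfer user should exist with its own group&lt;br /&gt;
id transfer&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;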
&lt;br /&gt;
&lt;br /&gt;
Allow password authentication for the transfer user:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$EDITOR /etc/ssh/sshd_config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
Match User transfer&lt;br /&gt;
        PasswordAuthentication yes&lt;br /&gt;
&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To apply the changes above, restart the SSH daemon:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/etc/init.d/sshd restart&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney cloud]][[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3772</id>
		<title>stoney cloud: Multi-Node Installation</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_cloud:_Multi-Node_Installation&amp;diff=3772"/>
		<updated>2014-06-27T12:05:08Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
The [http://www.stoney-cloud.org/ stoney cloud] builds upon various standard open source components and can be run on commodity hardware. The final Multi-Node Setup consists of the following components:&lt;br /&gt;
* One [[Primary-Master-Node]] with an OpenLDAP Directory Server for the storage of the stoney cloud user and service related data with the web based management [[VM-Manager]] interface and the Linux kernel based virtualization technology.&lt;br /&gt;
* One [[Secondary-Master-Node]] with the Linux kernel based virtualization technology.&lt;br /&gt;
* Two [[Storage-Node | Storage-Nodes]] configured as  a replicated and distributed data storage service based on [http://www.gluster.org/ GlusterFS].&lt;br /&gt;
The components communicate with each other over a standard Ethernet based IPv4 network.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
The following items and conditions are required to be able to install and configure a stoney cloud environment:&lt;br /&gt;
* Dedicated Hardware (4 Servers) which fulfil the following requirements: &lt;br /&gt;
** 64-Bit Intel CPUs with VT technology (AMD is not tested at the moment).&lt;br /&gt;
** 8 Gigabytes of memory (more is better).&lt;br /&gt;
** Disks from 147 Gigabytes up to 2 Terabytes.&lt;br /&gt;
*** For all four Nodes ([[VM-Node | VM-Nodes]] and [[Storage-Node | Storage-Nodes]]) two 147 Gigabyte Disks, configured as RAID1, are enough for the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
*** Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
** Two physical Ethernet Interfaces, which support the same bandwidth (for example 1 Gigabit/s).&lt;br /&gt;
* Two Gigabit layer-2 switches supporting [http://en.wikipedia.org/wiki/IEEE_802.3ad IEEE 802.3ad] (dynamic link aggregation), [http://en.wikipedia.org/wiki/IEEE_802.1Q IEEE 802.1Q] (VLAN tagging) and stacking (optional, but recommended).&lt;br /&gt;
* Experience with Linux environments especially with [http://www.gentoo.org Gentoo Linux].&lt;br /&gt;
* Good experience with IP networking, because the Switches need to be configured manually (dynamic link aggregation and VLAN tagging).&lt;br /&gt;
&lt;br /&gt;
=== Limitations ===&lt;br /&gt;
During the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system, the first two physical Ethernet Interfaces are automatically configured as a logical interface (bond0) and the four tagged VLANs are set up. If more than two physical Ethernet Interfaces are to be included into the logical interface (bond0), or if the physical Ethernet Interfaces have different bandwidths (for example 1 Gigabit/s and 10 Gigabit/s), the final logical interface (bond0) needs to be configured manually after the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
Only the first two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] are set up automatically. More [[Storage-Node | Storage-Nodes]] need to be integrated manually.&lt;br /&gt;
&lt;br /&gt;
=== Network Overview ===&lt;br /&gt;
As stated before, a minimal multi-node stoney cloud environment consists of two VM-Nodes and two Storage-Nodes.&lt;br /&gt;
It is highly recommended to use IEEE 802.3ad link aggregation (bonding, trunking etc.) over two network cards and to attach them as one logical link to the access switch.&lt;br /&gt;
&lt;br /&gt;
How to configure the switches for stacking, link aggregation and VLAN tagging is out of the scope of this document; consult the respective user manual.&lt;br /&gt;
&lt;br /&gt;
There are two scenarios on how to connect the nodes to the network.&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 1 (recommended) ====&lt;br /&gt;
The preferred solution is to use two stackable switches (supporting link aggregation across two switches). Connect the nodes and switches as illustrated below, thus eliminating the single point of failure present in scenario 2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+        &lt;br /&gt;
                                     bond0 | \      / | bond0&lt;br /&gt;
                                           |  \    /  |&lt;br /&gt;
                                           |   \  /   |&lt;br /&gt;
                                           |    \/    |                                          ___&lt;br /&gt;
                                           |    /\    |                                      ___(   )___&lt;br /&gt;
                                           |   /  \   |                                   __(           )__&lt;br /&gt;
                                           |  /    \  |                                 _(                 )_&lt;br /&gt;
                                 +-------------+   +-------------+                    _(                     )_&lt;br /&gt;
                                 |  switch-01  |===|  switch-02  |-------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                 +-------------+   +-------------+                     (_                   _)&lt;br /&gt;
                                           | \      / |                                  (__             __)&lt;br /&gt;
                                           |  \    /  |                                     (___     ___)&lt;br /&gt;
                                           |   \  /   |                                         (___)&lt;br /&gt;
                                           |    \/    |&lt;br /&gt;
                                           |    /\    |&lt;br /&gt;
                                           |   /  \   |&lt;br /&gt;
                                     bond0 |  /    \  | bond0&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Physical Layer Scenario 2 (use at your own risk) ====&lt;br /&gt;
If there is only one switch available (or the switches aren&#039;t stackable), connect the nodes as illustrated below. As you can see, the switch is a single point of failure.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
                      +----------------------+      +----------------------+&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      |      vm-node-01      |      |      vm-node-02      |&lt;br /&gt;
                      |                      |      |                      |&lt;br /&gt;
                      +----------------------+      +----------------------+                     ___&lt;br /&gt;
                                  bond0  \\            // bond0                              ___(   )___&lt;br /&gt;
                                          \\          //                                  __(           )__&lt;br /&gt;
                                           \\        //                                 _(                 )_&lt;br /&gt;
                                         +-------------+                              _(                     )_&lt;br /&gt;
                                         |  switch-01  |-----------------------------(_   Corporate LAN/WAN   _)&lt;br /&gt;
                                         +-------------+                               (_                   _)&lt;br /&gt;
                                           //       \\                                   (__             __) &lt;br /&gt;
                                          //         \\                                     (___     ___)&lt;br /&gt;
                                  bond0  //           \\ bond0                                  (___)&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     | tier1-storage-node-01 |      | tier1-storage-node-02 |&lt;br /&gt;
                     |                       |      |                       |&lt;br /&gt;
                     +-----------------------+      +-----------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Network Overview: Logical Layer ====&lt;br /&gt;
The goal is to achieve the following configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|   10.1.110.1X  |   10.1.120.1X  |   10.1.130.1X  | 192.168.140.1X |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
|                |                |                |     vmbr0      |&lt;br /&gt;
|     vlan110    |    vlan120     |    vlan130     +----------------+&lt;br /&gt;
|                |                |                |    vlan140     |&lt;br /&gt;
+----------------+----------------+----------------+----------------+&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
|                   bond0 (bonding.mode=802.3ad)                    |&lt;br /&gt;
+-------------------------------------------------------------------+&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
|      eth0      |                                 |      eth1      |&lt;br /&gt;
+----------------+                                 +----------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
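The logical layer above can be expressed in Gentoo netifrc terms. The following /etc/conf.d/net fragment is a sketch only, using the vm-node-01 addresses from the tables in this section; the variable names follow netifrc conventions and should be verified against the installed netifrc version before use.&lt;br /&gt;

```shell
# /etc/conf.d/net -- sketch only, not the generated configuration.
# Variable names follow Gentoo netifrc conventions; verify them
# against your installed netifrc documentation before use.

slaves_bond0="eth0 eth1"        # the two physical interfaces
mode_bond0="802.3ad"            # IEEE 802.3ad link aggregation
vlans_bond0="110 120 130 140"   # admin, data, int and pub VLANs

# Example addresses for vm-node-01 (see the tables in this section)
config_bond0_110="10.1.110.13/24"
config_bond0_120="10.1.120.13/24"
config_bond0_130="10.1.130.13/24"

# The pub VLAN (140) is bridged into vmbr0, which carries the VMs
bridge_vmbr0="bond0.140"
config_vmbr0="192.168.140.13/24"
```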
&lt;br /&gt;
The ideal stoney cloud environment is based on four logically separated VLANs (virtual LANs):&lt;br /&gt;
* &#039;&#039;&#039;admin&#039;&#039;&#039;: Administrative network, used for administration and monitoring purposes.&lt;br /&gt;
* &#039;&#039;&#039;data&#039;&#039;&#039;: Data network, used for GlusterFS traffic.&lt;br /&gt;
* &#039;&#039;&#039;int&#039;&#039;&#039;: Internal network, used for internal traffic such as LDAP, libvirt and more.&lt;br /&gt;
* &#039;&#039;&#039;pub&#039;&#039;&#039;: Public network, used for accessing the VM-Manager web interface, Spice traffic and internet access.&lt;br /&gt;
&lt;br /&gt;
Each of the above VLANs holds dedicated services and separates them from the others. This documentation assumes that the four VLANs are present and the following IP networks are available:&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100%&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VLAN ID&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Network prefix&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Default Gateway address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Broadcast address&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Domain name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|VIP&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|admin&lt;br /&gt;
|110&lt;br /&gt;
|10.1.110.0/24&lt;br /&gt;
| -- &lt;br /&gt;
|10.1.110.255&lt;br /&gt;
|admin.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|data&lt;br /&gt;
|120&lt;br /&gt;
|10.1.120.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.120.255&lt;br /&gt;
|data.example.com&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|int&lt;br /&gt;
|130&lt;br /&gt;
|10.1.130.0/24&lt;br /&gt;
| --&lt;br /&gt;
|10.1.130.255&lt;br /&gt;
|int.example.com&lt;br /&gt;
|10.1.130.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|pub&lt;br /&gt;
|140&lt;br /&gt;
|192.168.140.0/24&lt;br /&gt;
|192.168.140.1&lt;br /&gt;
|192.168.140.255&lt;br /&gt;
|example.com&lt;br /&gt;
|192.168.140.10&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
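As a quick sanity check, the broadcast addresses in the table follow directly from the /24 network prefixes. A small Python sketch (standard library only) that verifies an addressing plan like this:&lt;br /&gt;

```python
import ipaddress

# VLAN networks as listed in the table above
vlans = {
    "admin": "10.1.110.0/24",
    "data":  "10.1.120.0/24",
    "int":   "10.1.130.0/24",
    "pub":   "192.168.140.0/24",
}

for name, prefix in vlans.items():
    net = ipaddress.ip_network(prefix)
    # broadcast address and usable host count derive from the prefix
    print(f"{name}: broadcast {net.broadcast_address}, "
          f"{net.num_addresses - 2} usable hosts")
```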
&lt;br /&gt;
The IP allocation of the nodes will be assumed as stated in the table below:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; style=&amp;quot;border-collapse: collapse; font-size:100%;&amp;quot; width=100% &lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Node name&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Admin address (VLAN 110)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Data address (VLAN 120)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Int address (VLAN 130)&lt;br /&gt;
!align=&amp;quot;left&amp;quot;|Pub address (VLAN 140)&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-01&lt;br /&gt;
|10.1.110.11&lt;br /&gt;
|10.1.120.11&lt;br /&gt;
|10.1.130.11&lt;br /&gt;
|192.168.140.11&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|tier1-storage-node-02&lt;br /&gt;
|10.1.110.12&lt;br /&gt;
|10.1.120.12&lt;br /&gt;
|10.1.130.12&lt;br /&gt;
|192.168.140.12&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-01&lt;br /&gt;
|10.1.110.13&lt;br /&gt;
|10.1.120.13&lt;br /&gt;
|10.1.130.13&lt;br /&gt;
|192.168.140.13&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|vm-node-02&lt;br /&gt;
|10.1.110.14&lt;br /&gt;
|10.1.120.14&lt;br /&gt;
|10.1.130.14&lt;br /&gt;
|192.168.140.14&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
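The allocation follows a simple pattern: the third octet carries the VLAN ID and the last octet identifies the node (11 to 14). A short Python sketch reproducing the table:&lt;br /&gt;

```python
# Host octet -> node name, as in the table above
nodes = {
    11: "tier1-storage-node-01",
    12: "tier1-storage-node-02",
    13: "vm-node-01",
    14: "vm-node-02",
}

def addresses(host_octet):
    """Per-VLAN addresses for a node; the third octet equals the VLAN ID."""
    return {
        "admin": f"10.1.110.{host_octet}",
        "data":  f"10.1.120.{host_octet}",
        "int":   f"10.1.130.{host_octet}",
        "pub":   f"192.168.140.{host_octet}",
    }

for host_octet, name in nodes.items():
    print(name, addresses(host_octet))
```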
&lt;br /&gt;
We&#039;ll also presume that we have the following Domain Name Server:&lt;br /&gt;
* Domain Name Server 1: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Base Installation ==&lt;br /&gt;
All nodes are based on the [http://www.gentoo.org/ Gentoo Linux] operating system.&lt;br /&gt;
&lt;br /&gt;
=== BIOS Set Up Checklist ===&lt;br /&gt;
* Enable the &amp;quot;reboot-after-power-loss&amp;quot; option (if your BIOS supports it).&lt;br /&gt;
* Make sure that you have the newest BIOS (BMC and perhaps SCSI firmware) version.&lt;br /&gt;
* Make sure you&#039;ve disabled halt on POST errors (or similar) or enabled keyboard-less operation (if your BIOS supports it).&lt;br /&gt;
&lt;br /&gt;
=== RAID Set Up ===&lt;br /&gt;
Create a RAID1 volume. This RAID-Set is used for the Operating System. Please be aware that the current stoney cloud only supports disks from 147 Gigabytes up to 2 Terabytes for this first RAID-Set.&lt;br /&gt;
&lt;br /&gt;
Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as a [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.&lt;br /&gt;
&lt;br /&gt;
=== Node Installation ===&lt;br /&gt;
The first step of the Semi-Automatic Multi-Node Set Up is the same for all four Nodes. In this documentation we presume that you stick to the naming convention mentioned above. After the Base Installation of the Nodes, the following daemons will be running:&lt;br /&gt;
* &#039;&#039;&#039;crond&#039;&#039;&#039;: Cron daemon, executes scheduled commands.&lt;br /&gt;
* &#039;&#039;&#039;ntpd&#039;&#039;&#039;: Network Time Protocol daemon, keeps the system time synchronized with time servers in the LAN or WAN.&lt;br /&gt;
* &#039;&#039;&#039;sshd&#039;&#039;&#039;: OpenSSH SSH daemon, used for remote access and remote administration.&lt;br /&gt;
* &#039;&#039;&#039;syslogd&#039;&#039;&#039;: System logging daemon, keeps track of messages from the system and the applications.&lt;br /&gt;
* &#039;&#039;&#039;udevd&#039;&#039;&#039;: Linux dynamic device management daemon, manages device events, symlinks and permissions.&lt;br /&gt;
&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions as follows (the bold values are examples; they can be set by the administrator and vary according to the local setup):&lt;br /&gt;
## Global Section&lt;br /&gt;
### Confirm that you want to start? &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose a Node-Type: &#039;&#039;&#039;Storage-Node&#039;&#039;&#039;&lt;br /&gt;
### Choose a Block-Device: &#039;&#039;&#039;sda&#039;&#039;&#039;&lt;br /&gt;
### Confirm to erase all (from a previous installation): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Confirm to continue with the given Partition-Scheme: &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Choose the network interfaces to bond together.&lt;br /&gt;
#### Device #0: &#039;&#039;&#039;eth0&#039;&#039;&#039;&lt;br /&gt;
#### Device #1: &#039;&#039;&#039;eth1&#039;&#039;&#039;&lt;br /&gt;
### Node-Name: &#039;&#039;&#039;tier1-storage-node-01&#039;&#039;&#039;&lt;br /&gt;
## pub-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;140&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;192.168.140.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;192.168.140.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the pub-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## admin-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;110&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;admin.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.110.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the admin-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## data-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;120&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;data.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.120.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.120.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the data-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## int-VLAN Section&lt;br /&gt;
### VLAN ID: &#039;&#039;&#039;130&#039;&#039;&#039;&lt;br /&gt;
### Domain Name: &#039;&#039;&#039;int.example.com&#039;&#039;&#039;&lt;br /&gt;
### IP Address: &#039;&#039;&#039;10.1.130.11&#039;&#039;&#039;&lt;br /&gt;
### Netmask: &#039;&#039;&#039;24&#039;&#039;&#039;&lt;br /&gt;
### Broadcast: &#039;&#039;&#039;10.1.130.255&#039;&#039;&#039;&lt;br /&gt;
### Confirm the int-VLAN Section with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
### Enter the default Gateway: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Enter the primary DNS-Server: &#039;&#039;&#039;192.168.140.1&#039;&#039;&#039;&lt;br /&gt;
### Omit configuring a second DNS-Server with &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
## Confirm the listed configuration with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
## Enter your very secret root password&lt;br /&gt;
## Confirm to reboot with &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
# Make sure that you boot from the first hard disk and not from the installation medium again.&lt;br /&gt;
# Continue with [[Multi-Node Installation#Specialized_Installation|specializing your Node]]&lt;br /&gt;
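Note that the Netmask prompts above expect the prefix length (24) rather than a dotted-decimal mask; the two notations are equivalent, as a short Python check shows:&lt;br /&gt;

```python
import ipaddress

# The /24 prefix length and the dotted-decimal netmask are two
# notations for the same thing (address taken from this section)
iface = ipaddress.ip_interface("192.168.140.11/24")
print(iface.netmask)   # 255.255.255.0
print(iface.network)   # 192.168.140.0/24
```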
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the Node-Name and the IP addresses for tier1-storage-node-02 from the tables above.&lt;br /&gt;
# Reboot the server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the Node-Name and the IP addresses for vm-node-01 from the tables above and choosing the appropriate Node-Type.&lt;br /&gt;
# Reboot the server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
# Insert the stoney cloud CD and boot the server.&lt;br /&gt;
# Answer the questions analogously to the first Storage-Node, using the Node-Name and the IP addresses for vm-node-02 from the tables above and choosing the appropriate Node-Type.&lt;br /&gt;
# Reboot the server and make sure that you boot from the first hard disk.&lt;br /&gt;
&lt;br /&gt;
== Skipping Checks ==&lt;br /&gt;
To skip checks, type &#039;&#039;&#039;no&#039;&#039;&#039; when asked:&lt;br /&gt;
 Do you want to start the installation?&lt;br /&gt;
 yes or no?: &#039;&#039;&#039;no&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Then manually restart the stoney cloud installer with the desired options. For example:&lt;br /&gt;
 /mnt/cdrom/foss-cloud-installer -c&lt;br /&gt;
&lt;br /&gt;
Options:&lt;br /&gt;
 -c: Skip CPU requirement checks&lt;br /&gt;
 -m: Skip memory requirement checks&lt;br /&gt;
 -s: Skip CPU and memory requirement checks&lt;br /&gt;
&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
==== First Storage-Node (tier1-storage-node-01) ====&lt;br /&gt;
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the first [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Second Storage-Node (tier1-storage-node-02) ====&lt;br /&gt;
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the second [[Storage-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-storage-node&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Primary-Master-Node (vm-node-01) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the VM-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Primary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
The stoney cloud uses virtual IP addresses (VIPs) for failover purposes. Therefore you need to configure [http://www.pureftpd.org/project/ucarp ucarp].&lt;br /&gt;
&lt;br /&gt;
Confirm that you want to run the script.&lt;br /&gt;
 Do you really want to proceed with configuration of the primary-master-node?&lt;br /&gt;
 yes or no (default: no): &#039;&#039;&#039;yes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the public interface. Apache will listen on this VIP, as it serves the public interface (if you followed this documentation, the VIP for the public interface is 192.168.140.10).&lt;br /&gt;
 Please enter the VIP for the pub-interface (VLAN 140)&lt;br /&gt;
 (default: 192.168.140.10): &#039;&#039;&#039;192.168.140.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the VIP (virtual IP) for the internal interface. The LDAP server will listen on this VIP, as it serves the internal interface (if you followed this documentation, the VIP for the internal interface is 10.1.130.10).&lt;br /&gt;
 Please enter the VIP for the int-interface (VLAN 130)&lt;br /&gt;
 (default: 10.1.130.10): &#039;&#039;&#039;10.1.130.10&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Enter the RIP (real IP) for the secondary-master-node on the internal interface. The RIP is needed to keep the LDAP directories synchronized (if you followed this documentation, the RIP for the secondary-master-node on the internal interface is 10.1.130.14).&lt;br /&gt;
 Please enter the IP for the int-interface (VLAN 130) of the&lt;br /&gt;
 secondary-master-node (default: 10.1.130.14): &#039;&#039;&#039;10.1.130.14&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The script now tests the network configuration.&lt;br /&gt;
&lt;br /&gt;
In order to mount the gluster-filesystem, you need to connect via SSH to the primary-storage-node. Enter the IP for the primary-storage-node on the admin interface (if you followed this documentation, the IP for the primary-storage-node on the admin interface is 10.1.110.11). Enter a valid username which exists on the primary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the primary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.11): &#039;&#039;&#039;10.1.110.11&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root: &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In order to mount the gluster-filesystem, you need to connect via SSH to the secondary-storage-node. Enter the IP for the secondary-storage-node on the admin interface (if you followed this documentation, the IP for the secondary-storage-node on the admin interface is 10.1.110.12). Enter a valid username which exists on the secondary-storage-node (if you followed this documentation, it is root) and the corresponding password.&lt;br /&gt;
 Please enter the following information for the secondary Storage-Node with the OpenSSH daemon listening on the VLAN with the name &#039;admin&#039; and with the VLAN ID &#039;110&#039;:&lt;br /&gt;
 &lt;br /&gt;
 IP address (default: 10.1.110.12): &#039;&#039;&#039;10.1.110.12&#039;&#039;&#039;&lt;br /&gt;
 Username (default: root): &#039;&#039;&#039;root&#039;&#039;&#039;&lt;br /&gt;
 Password for root:  &#039;&#039;&#039;**********&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Configure the LDAP directory: &lt;br /&gt;
** Define the password for the LDAP-Superuser (cn=Manager,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Currently the user for the prov-backup-kvm daemon is the LDAP-Superuser, so enter the same password again.&lt;br /&gt;
** Define the password for the LDAP-dhcp user (cn=dhcp,ou=services,ou=administration,dc=stoney-cloud,dc=org)&lt;br /&gt;
** Enter all necessary information for the stoney cloud administrator (User1)&lt;br /&gt;
*** Given name&lt;br /&gt;
*** Surname&lt;br /&gt;
*** Gender&lt;br /&gt;
*** E-mail&lt;br /&gt;
*** Language&lt;br /&gt;
*** Password&lt;br /&gt;
&lt;br /&gt;
* Finally, enter the domain name which corresponds to the public VIP (the default is stoney-cloud.example.org).&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirthook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
==== Secondary-Master-Node (vm-node-02) ====&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the Secondary-Master-Node]].&lt;br /&gt;
&lt;br /&gt;
Log into the [[Secondary-Master-Node]] and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type secondary-master-node&lt;br /&gt;
&lt;br /&gt;
* In order to get the configuration from the primary-master-node, we need to access it via SSH:&lt;br /&gt;
** Enter the IP for the primary-master-node on the admin interface (if you followed this documentation, it is 10.1.110.13).&lt;br /&gt;
** Enter the username (if you followed the default setup, it is root).&lt;br /&gt;
** Enter the user&#039;s password.&lt;br /&gt;
&lt;br /&gt;
* In order to mount the gluster-filesystem, you need to connect via SSH to the primary-storage-node, so enter the following information:&lt;br /&gt;
** Enter the IP for the primary-storage-node on the admin interface (if you followed this documentation, it is 10.1.110.11).&lt;br /&gt;
** Enter a valid username which exists on the primary-storage-node (if you followed this documentation, it is root).&lt;br /&gt;
** Enter the user&#039;s password.&lt;br /&gt;
* Repeat the same procedure for the secondary-storage-node (if you followed this documentation, the IP is 10.1.110.12).&lt;br /&gt;
&lt;br /&gt;
* Enter the LDAP-Superuser password you defined during the [[Multi-Node Installation#Primary-Master-Node (stoney-cloud-node-01)_2 | primary-master-node]] installation&lt;br /&gt;
&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirthook scripts:&lt;br /&gt;
** You mainly have to fill in the following variables:&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnUser&#039;&#039;&#039;&lt;br /&gt;
*** &#039;&#039;&#039;libvirtHookFirewallSvnPassword&#039;&#039;&#039;&lt;br /&gt;
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]&lt;br /&gt;
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:&lt;br /&gt;
** Fill in the &amp;lt;code&amp;gt;/etc/Provisioning/Backup/LDAPKVMWrapper.conf&amp;lt;/code&amp;gt; file&lt;br /&gt;
** Create a cronjob entry which runs the script &amp;lt;code&amp;gt;/usr/bin/LDAPKVMWrapper.pl&amp;lt;/code&amp;gt; once a day:&lt;br /&gt;
*** &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
= Node Integration =&lt;br /&gt;
&lt;br /&gt;
= Old Documentation =&lt;br /&gt;
== Specialized Installation ==&lt;br /&gt;
=== Primary-Master-Node (vm-node-01) ===&lt;br /&gt;
If you configured an additional Backup Volume on the Storage-Nodes, you may want to mount it now on the VM-Node.&lt;br /&gt;
&lt;br /&gt;
Log into the Primary-Master-Node and execute the node-configuration script as follows:&lt;br /&gt;
 /usr/sbin/fc-node-configuration --node-type primary-master-node&lt;br /&gt;
&lt;br /&gt;
==== Manual Steps ====&lt;br /&gt;
In order to be able to migrate a VM from a carrier, a special user called transfer will be created.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
lvcreate -L 60G -n transfer local0&lt;br /&gt;
&lt;br /&gt;
mkfs.xfs -L &amp;quot;OSBD_transfe&amp;quot; /dev/local0/transfer &lt;br /&gt;
&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/fstab&lt;br /&gt;
&lt;br /&gt;
LABEL=OSBD_transfe  /home/transfer    xfs      noatime,nodev,nosuid,noexec  0 2&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
mount /home/transfer&lt;br /&gt;
&lt;br /&gt;
useradd --comment &amp;quot;User which is used for VM disk file transfer between carriers&amp;quot; \&lt;br /&gt;
        --create-home \&lt;br /&gt;
        --system \&lt;br /&gt;
        --user-group \&lt;br /&gt;
        transfer&lt;br /&gt;
&lt;br /&gt;
passwd transfer&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Allow password authentication for the transfer user:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$EDITOR /etc/ssh/sshd_config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[...]&lt;br /&gt;
&lt;br /&gt;
Match User transfer&lt;br /&gt;
        PasswordAuthentication yes&lt;br /&gt;
&lt;br /&gt;
[...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To apply the changes above, restart the SSH daemon:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/etc/init.d/sshd restart&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney cloud]][[Category:Installation]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_backup:_Server_set-up&amp;diff=3771</id>
		<title>stoney backup: Server set-up</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_backup:_Server_set-up&amp;diff=3771"/>
		<updated>2014-06-27T07:41:57Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Provisioning global configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Abstract =&lt;br /&gt;
This document describes the set-up of a server for the stoney cloud (Online) Backup service, built upon the [http://www.gentoo.org/ Gentoo] Linux distribution.&lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
After working through this documentation, you will be able to set up and configure your own (Online) Backup service server.&lt;br /&gt;
&lt;br /&gt;
= Software Installation =&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
&lt;br /&gt;
== Keywords &amp;amp; USE-Flags ==&lt;br /&gt;
For a minimal OpenLDAP directory installation:&lt;br /&gt;
 echo &amp;quot;net-nds/openldap minimal sasl&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.use&lt;br /&gt;
 echo &amp;quot;net-nds/openldap ~amd64&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.keywords&lt;br /&gt;
&lt;br /&gt;
NSS and PAM modules for lookups using LDAP:&lt;br /&gt;
 echo &amp;quot;sys-auth/nss-pam-ldapd sasl&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.use&lt;br /&gt;
 echo &amp;quot;sys-auth/nss-pam-ldapd ~amd64&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.keywords&lt;br /&gt;
 echo &amp;quot;sys-fs/quota ldap&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.use&lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;=app-admin/jailkit-2.16 ~amd64&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.keywords&lt;br /&gt;
&lt;br /&gt;
For the prov-backup-rsnapshot daemon:&lt;br /&gt;
 echo &amp;quot;dev-perl/Net-SMTPS ~amd64&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.keywords&lt;br /&gt;
 echo &amp;quot;perl-core/Switch ~amd64&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.keywords&lt;br /&gt;
&lt;br /&gt;
To build only puttygen, without X11:&lt;br /&gt;
 echo &amp;quot;net-misc/putty ~amd64&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.keywords&lt;br /&gt;
 echo &amp;quot;net-misc/putty -gtk&amp;quot; &amp;gt;&amp;gt; /etc/portage/package.use&lt;br /&gt;
&lt;br /&gt;
== Emerge ==&lt;br /&gt;
 emerge -va nss-pam-ldapd \&lt;br /&gt;
            quota \&lt;br /&gt;
            net-misc/putty \&lt;br /&gt;
            app-admin/jailkit \&lt;br /&gt;
            sys-apps/haveged \&lt;br /&gt;
            sys-apps/sst-backup-utils \&lt;br /&gt;
            sys-apps/sst-prov-backup-rsnapshot&lt;br /&gt;
&lt;br /&gt;
To list the dependencies of ebuilds, you can use &amp;lt;code&amp;gt;equery&amp;lt;/code&amp;gt;:&lt;br /&gt;
 equery depgraph sst-backup-utils&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 * Searching for sst-backup-utils ...&lt;br /&gt;
&lt;br /&gt;
 * dependency graph for sys-apps/sst-backup-utils-0.1.0&lt;br /&gt;
 `--  sys-apps/sst-backup-utils-0.1.0  amd64 &lt;br /&gt;
   `--  dev-perl/PerlUtil-0.1.0  (&amp;gt;=dev-perl/PerlUtil-0.1.0) amd64 &lt;br /&gt;
   `--  virtual/perl-Sys-Syslog-0.320.0  (virtual/perl-Sys-Syslog) amd64 &lt;br /&gt;
   `--  dev-perl/perl-ldap-0.530.0  (dev-perl/perl-ldap) amd64 &lt;br /&gt;
   `--  dev-perl/XML-Simple-2.200.0  (dev-perl/XML-Simple) amd64 &lt;br /&gt;
   `--  dev-perl/Config-IniFiles-2.780.0  (dev-perl/Config-IniFiles) amd64 &lt;br /&gt;
   `--  dev-perl/XML-Validator-Schema-1.100.0  (dev-perl/XML-Validator-Schema) amd64 &lt;br /&gt;
   `--  dev-perl/Date-Calc-6.300.0  (dev-perl/Date-Calc) amd64 &lt;br /&gt;
   `--  dev-perl/DateManip-6.310.0  (dev-perl/DateManip) amd64 &lt;br /&gt;
   `--  dev-perl/Schedule-Cron-Events-1.930.0  (dev-perl/Schedule-Cron-Events) amd64 &lt;br /&gt;
   `--  dev-perl/DateTime-Format-Strptime-1.520.0  (dev-perl/DateTime-Format-Strptime) amd64 &lt;br /&gt;
   `--  dev-perl/XML-SAX-0.990.0  (dev-perl/XML-SAX) amd64 &lt;br /&gt;
   `--  virtual/perl-MIME-Base64-3.130.0-r2  (virtual/perl-MIME-Base64) amd64 &lt;br /&gt;
   `--  dev-perl/Authen-SASL-2.160.0  (dev-perl/Authen-SASL) amd64 &lt;br /&gt;
   `--  dev-perl/Net-SMTPS-0.30.0  (dev-perl/Net-SMTPS) ~amd64 &lt;br /&gt;
   `--  dev-perl/text-template-1.450.0  (dev-perl/text-template) amd64 &lt;br /&gt;
   `--  virtual/perl-Getopt-Long-2.380.0-r2  (virtual/perl-Getopt-Long) amd64 &lt;br /&gt;
   `--  dev-perl/Parallel-ForkManager-1.20.0  (dev-perl/Parallel-ForkManager) amd64 &lt;br /&gt;
   `--  dev-perl/Time-Stopwatch-1.0.0  (dev-perl/Time-Stopwatch) amd64 &lt;br /&gt;
   `--  app-backup/rsnapshot-1.3.1-r1  (app-backup/rsnapshot) amd64 &lt;br /&gt;
[ sys-apps/sst-backup-utils-0.1.0 stats: packages (20), max depth (1) ]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information, visit the [http://www.gentoo.org/doc/en/gentoolkit.xml Gentoolkit] page.&lt;br /&gt;
&lt;br /&gt;
= Base Server Software Configuration =&lt;br /&gt;
== OpenSSH ==&lt;br /&gt;
=== OpenSSH Configuration ===&lt;br /&gt;
Configure the OpenSSH daemon:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
vi /etc/ssh/sshd_config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Set following options:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PubkeyAuthentication yes&lt;br /&gt;
PasswordAuthentication yes&lt;br /&gt;
UsePAM yes&lt;br /&gt;
Subsystem     sftp   internal-sftp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that &amp;lt;code&amp;gt;Subsystem     sftp   internal-sftp&amp;lt;/code&amp;gt; is the last line in the configuration file.&lt;br /&gt;
&lt;br /&gt;
We want to limit the number of chroot environments per directory. As the &amp;lt;code&amp;gt;ChrootDirectory&amp;lt;/code&amp;gt; configuration option only allows &amp;lt;code&amp;gt;%h&amp;lt;/code&amp;gt; (home directory of the user) and &amp;lt;code&amp;gt;%u&amp;lt;/code&amp;gt; (username of the user), we need to create the necessary matching rules in the form of:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
Match User *000&lt;br /&gt;
  ChrootDirectory /var/backup/000/%u&lt;br /&gt;
  AuthorizedKeysFile /var/backup/000/%u/%h/.ssh/authorized_keys&lt;br /&gt;
Match&lt;br /&gt;
Match User *001&lt;br /&gt;
  ChrootDirectory /var/backup/001/%u&lt;br /&gt;
  AuthorizedKeysFile /var/backup/001/%u/%h/.ssh/authorized_keys&lt;br /&gt;
Match&lt;br /&gt;
...&lt;br /&gt;
Match User *999&lt;br /&gt;
  ChrootDirectory /var/backup/999/%u&lt;br /&gt;
  AuthorizedKeysFile /var/backup/999/%u/%h/.ssh/authorized_keys&lt;br /&gt;
Match&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The creation of the matching rules is done by executing the following bash commands:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
FILE=/etc/ssh/sshd_config;&lt;br /&gt;
&lt;br /&gt;
for x in {0..999} ; do \&lt;br /&gt;
  printf &amp;quot;Match User *%03d\n&amp;quot; $x &amp;gt;&amp;gt; ${FILE}; \&lt;br /&gt;
  printf &amp;quot;  ChrootDirectory /var/backup/%03d/%%u\n&amp;quot; $x &amp;gt;&amp;gt; ${FILE}; \&lt;br /&gt;
  printf &amp;quot;  AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n&amp;quot; $x &amp;gt;&amp;gt; ${FILE}; \&lt;br /&gt;
  printf &amp;quot;Match\n&amp;quot; &amp;gt;&amp;gt; ${FILE}; \&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
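If you prefer to inspect the result before touching the live configuration, the same loop can be pointed at a temporary file first. This is a sketch; the &amp;lt;code&amp;gt;OUT&amp;lt;/code&amp;gt; variable and the use of &amp;lt;code&amp;gt;mktemp&amp;lt;/code&amp;gt; are illustrative:&lt;br /&gt;

```shell
# Sketch: generate the Match rules into a temporary file, verify the
# result, then append it to /etc/ssh/sshd_config yourself.
OUT=$(mktemp)
for x in $(seq 0 999); do
  printf "Match User *%03d\n" "$x" >> "$OUT"
  printf "  ChrootDirectory /var/backup/%03d/%%u\n" "$x" >> "$OUT"
  printf "  AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" "$x" >> "$OUT"
  printf "Match\n" >> "$OUT"
done
grep -c "^Match User" "$OUT"   # 1000 blocks, one per user pattern
```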
&lt;br /&gt;
Don&#039;t forget to restart the OpenSSH daemon:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/etc/init.d/sshd restart&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== OpenSSH Host Keys ===&lt;br /&gt;
If you migrate from an existing backup server, you might want to copy the SSH host keys to the new server. If you do so, clients won&#039;t see a difference between the two hosts, as the fingerprints remain the same. Copy the following files from the existing host to the new one:&lt;br /&gt;
* /etc/ssh/ssh_host_dsa_key&lt;br /&gt;
* /etc/ssh/ssh_host_ecdsa_key&lt;br /&gt;
* /etc/ssh/ssh_host_key&lt;br /&gt;
* /etc/ssh/ssh_host_rsa_key&lt;br /&gt;
* /etc/ssh/ssh_host_dsa_key.pub&lt;br /&gt;
* /etc/ssh/ssh_host_ecdsa_key.pub&lt;br /&gt;
* /etc/ssh/ssh_host_key.pub&lt;br /&gt;
* /etc/ssh/ssh_host_rsa_key.pub&lt;br /&gt;
&lt;br /&gt;
Set the correct permissions on the new host:&lt;br /&gt;
 chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_key /etc/ssh/ssh_host_rsa_key&lt;br /&gt;
 chmod 644 /etc/ssh/*.pub&lt;br /&gt;
&lt;br /&gt;
And restart the SSH daemon. &#039;&#039;Caution&#039;&#039;: do not close your existing SSH session until you are sure the daemon has restarted properly and you can log in again.&lt;br /&gt;
 /etc/init.d/sshd restart&lt;br /&gt;
&lt;br /&gt;
== OpenLDAP ==&lt;br /&gt;
=== /etc/hosts ===&lt;br /&gt;
Update the &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt; with the LDAP server:&lt;br /&gt;
 /etc/hosts&lt;br /&gt;
&lt;br /&gt;
 # VIP of the LDAP Server&lt;br /&gt;
 31.216.40.4      ldapm.stoney-cloud.org&lt;br /&gt;
&lt;br /&gt;
=== Root CA Certificate Installation ===&lt;br /&gt;
Install the root CA certificate into the OpenSSL default certificate storage directory:&lt;br /&gt;
 fqdn=&amp;quot;cloud.stoney-cloud.org&amp;quot;    # The fully qualified domain name of the server containing the root certificate.&lt;br /&gt;
 &lt;br /&gt;
 cd /etc/ssl/certs/&lt;br /&gt;
 wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem&lt;br /&gt;
 chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem&lt;br /&gt;
 chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem&lt;br /&gt;
&lt;br /&gt;
Rebuild the CA hashes:&lt;br /&gt;
 c_rehash /etc/ssl/certs/&lt;br /&gt;
&lt;br /&gt;
=== /etc/openldap/ldap.conf ===&lt;br /&gt;
Update the LDAP configuration file &amp;lt;code&amp;gt;/etc/openldap/ldap.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
 /etc/openldap/ldap.conf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Used to specify a size limit to use when performing searches. The number should be a&lt;br /&gt;
# non-negative integer. A SIZELIMIT of zero (0) specifies an unlimited search size.&lt;br /&gt;
SIZELIMIT       20000&lt;br /&gt;
&lt;br /&gt;
# Used to specify a time limit to use when performing searches. The number should be a&lt;br /&gt;
# non-negative integer. A TIMELIMIT of zero (0) specifies an unlimited search time.&lt;br /&gt;
TIMELIMIT       45&lt;br /&gt;
&lt;br /&gt;
# Specify how alias dereferencing is done. DEREF should be set to one of never, always, search,&lt;br /&gt;
# or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when&lt;br /&gt;
# searching, or dereferenced only when locating the base object for the search. The default is to&lt;br /&gt;
# never dereference aliases.&lt;br /&gt;
DEREF           never&lt;br /&gt;
&lt;br /&gt;
# Specifies the URI(s) of one or more LDAP servers to which the LDAP library should connect. The URI&lt;br /&gt;
# scheme may be either ldap or ldaps, which refer to LDAP over TCP and LDAP over SSL (TLS)&lt;br /&gt;
# respectively. Each server&#039;s name can be specified as a domain-style name or an IP address&lt;br /&gt;
# literal. Optionally, the server&#039;s name can be followed by a &#039;:&#039; and the port number the LDAP&lt;br /&gt;
# server is listening on. If no port number is provided, the default port for the scheme is&lt;br /&gt;
# used (389 for ldap://, 636 for ldaps://). A space-separated list of URIs may be provided.&lt;br /&gt;
URI             ldaps://ldapm.stoney-cloud.org&lt;br /&gt;
&lt;br /&gt;
# Used to specify the default base DN to use when performing ldap operations. The base must be&lt;br /&gt;
# specified as a Distinguished Name in LDAP format.&lt;br /&gt;
BASE            dc=stoney-cloud,dc=org&lt;br /&gt;
&lt;br /&gt;
# This is a local copy of the certificate of the certificate authority&lt;br /&gt;
# used to sign the server certificate for the LDAP server I am using&lt;br /&gt;
TLS_CACERT      /etc/ssl/certs/FOSS-Cloud_CA.cert.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check your configuration by performing a search:&lt;br /&gt;
 ldapsearch -v -H &amp;quot;ldaps://ldapm.stoney-cloud.org&amp;quot; \&lt;br /&gt;
               -b &amp;quot;dc=stoney-cloud,dc=org&amp;quot; \&lt;br /&gt;
               -D &amp;quot;cn=Manager,dc=stoney-cloud,dc=org&amp;quot; \&lt;br /&gt;
               -s one &amp;quot;(objectClass=*)&amp;quot; \&lt;br /&gt;
               -LLL -W&lt;br /&gt;
&lt;br /&gt;
The result should look something like:&lt;br /&gt;
 ldap_initialize( ldaps://ldapm.stoney-cloud.org:636/??base )&lt;br /&gt;
 filter: (objectClass=*)&lt;br /&gt;
 requesting: All userApplication attributes&lt;br /&gt;
 dn: ou=administration,dc=stoney-cloud,dc=org&lt;br /&gt;
 objectClass: top&lt;br /&gt;
 objectClass: organizationalUnit&lt;br /&gt;
 ou: administration&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Random Number Generator (haveged) ==&lt;br /&gt;
Tools like putty depend on random numbers in order to create keys and certificates.&lt;br /&gt;
&lt;br /&gt;
=== haveged - Generate random numbers and feed linux random device ===&lt;br /&gt;
The haveged daemon doesn&#039;t need any special configuration, so you can start it directly from the command line:&lt;br /&gt;
 /etc/init.d/haveged start&lt;br /&gt;
&lt;br /&gt;
Check if the start was successful:&lt;br /&gt;
 ps auxf | grep haveged&lt;br /&gt;
&lt;br /&gt;
 root     18001  1.0  0.0   7420  3616 ?        Ss   08:48   0:00 /usr/sbin/haveged -r 0 -w 1024 -v 1&lt;br /&gt;
&lt;br /&gt;
Add the haveged daemon to the default run level:&lt;br /&gt;
 rc-update add haveged default&lt;br /&gt;
&lt;br /&gt;
== nss-pam-ldapd ==&lt;br /&gt;
=== nslcd.conf — configuration file for LDAP nameservice daemon ===&lt;br /&gt;
 /etc/nslcd.conf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This is the configuration file for the LDAP nameservice&lt;br /&gt;
# switch library&#039;s nslcd daemon. It configures the mapping&lt;br /&gt;
# between NSS names (see /etc/nsswitch.conf) and LDAP&lt;br /&gt;
# information in the directory.&lt;br /&gt;
# See the manual page nslcd.conf(5) for more information.&lt;br /&gt;
&lt;br /&gt;
# The user and group nslcd should run as.&lt;br /&gt;
uid nslcd&lt;br /&gt;
gid nslcd&lt;br /&gt;
&lt;br /&gt;
# The uri pointing to the LDAP server to use for name lookups.&lt;br /&gt;
# Multiple entries may be specified. The address that is used&lt;br /&gt;
# here should be resolvable without using LDAP (obviously).&lt;br /&gt;
#uri ldap://127.0.0.1/&lt;br /&gt;
#uri ldaps://127.0.0.1/&lt;br /&gt;
#uri ldapi://%2fvar%2frun%2fldapi_sock/&lt;br /&gt;
# Note: %2f encodes the &#039;/&#039; used as directory separator&lt;br /&gt;
uri ldaps://ldapm.stoney-cloud.org&lt;br /&gt;
&lt;br /&gt;
# The LDAP version to use (defaults to 3&lt;br /&gt;
# if supported by client library)&lt;br /&gt;
#ldap_version 3&lt;br /&gt;
&lt;br /&gt;
# The distinguished name of the search base.&lt;br /&gt;
base dc=stoney-cloud,dc=org&lt;br /&gt;
&lt;br /&gt;
# The distinguished name to bind to the server with.&lt;br /&gt;
# Optional: default is to bind anonymously.&lt;br /&gt;
binddn cn=Manager,dc=stoney-cloud,dc=org&lt;br /&gt;
&lt;br /&gt;
# The credentials to bind with.&lt;br /&gt;
# Optional: default is no credentials.&lt;br /&gt;
# Note that if you set a bindpw you should check the permissions of this file.&lt;br /&gt;
bindpw myverysecretpassword&lt;br /&gt;
&lt;br /&gt;
# The distinguished name to perform password modifications by root by.&lt;br /&gt;
#rootpwmoddn cn=admin,dc=example,dc=com&lt;br /&gt;
&lt;br /&gt;
# The default search scope.&lt;br /&gt;
#scope sub&lt;br /&gt;
#scope one&lt;br /&gt;
#scope base&lt;br /&gt;
&lt;br /&gt;
# Customize certain database lookups.&lt;br /&gt;
#base   group  ou=Groups,dc=example,dc=com&lt;br /&gt;
base   group  ou=groups,ou=backup,ou=services,dc=stoney-cloud,dc=org&lt;br /&gt;
base   passwd ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org&lt;br /&gt;
base   shadow ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org&lt;br /&gt;
#scope  group  onelevel&lt;br /&gt;
#scope  hosts  sub&lt;br /&gt;
&lt;br /&gt;
#filter group  (&amp;amp;(objectClass=posixGroup)(sstIsActive=TRUE))&lt;br /&gt;
filter passwd (&amp;amp;(objectClass=posixAccount)(sstIsActive=TRUE))&lt;br /&gt;
filter shadow (&amp;amp;(objectClass=shadowAccount)(sstIsActive=TRUE))&lt;br /&gt;
&lt;br /&gt;
# Bind/connect timelimit.&lt;br /&gt;
#bind_timelimit 30&lt;br /&gt;
&lt;br /&gt;
# Search timelimit.&lt;br /&gt;
#timelimit 30&lt;br /&gt;
&lt;br /&gt;
# Idle timelimit. nslcd will close connections if the&lt;br /&gt;
# server has not been contacted for the number of seconds.&lt;br /&gt;
#idle_timelimit 3600&lt;br /&gt;
&lt;br /&gt;
# Use StartTLS without verifying the server certificate.&lt;br /&gt;
#ssl start_tls&lt;br /&gt;
tls_reqcert never&lt;br /&gt;
&lt;br /&gt;
# CA certificates for server certificate verification&lt;br /&gt;
#tls_cacertdir /etc/ssl/certs&lt;br /&gt;
#tls_cacertfile /etc/ssl/ca.cert&lt;br /&gt;
&lt;br /&gt;
# Seed the PRNG if /dev/urandom is not provided&lt;br /&gt;
#tls_randfile /var/run/egd-pool&lt;br /&gt;
&lt;br /&gt;
# SSL cipher suite&lt;br /&gt;
# See man ciphers for syntax&lt;br /&gt;
#tls_ciphers TLSv1&lt;br /&gt;
&lt;br /&gt;
# Client certificate and key&lt;br /&gt;
# Use these, if your server requires client authentication.&lt;br /&gt;
#tls_cert&lt;br /&gt;
#tls_key&lt;br /&gt;
&lt;br /&gt;
# Mappings for Services for UNIX 3.5&lt;br /&gt;
#filter passwd (objectClass=User)&lt;br /&gt;
#map    passwd uid              msSFU30Name&lt;br /&gt;
#map    passwd userPassword     msSFU30Password&lt;br /&gt;
#map    passwd homeDirectory    msSFU30HomeDirectory&lt;br /&gt;
#map    passwd homeDirectory    msSFUHomeDirectory&lt;br /&gt;
#filter shadow (objectClass=User)&lt;br /&gt;
#map    shadow uid              msSFU30Name&lt;br /&gt;
#map    shadow userPassword     msSFU30Password&lt;br /&gt;
#filter group  (objectClass=Group)&lt;br /&gt;
#map    group  member           msSFU30PosixMember&lt;br /&gt;
&lt;br /&gt;
# Mappings for Services for UNIX 2.0&lt;br /&gt;
#filter passwd (objectClass=User)&lt;br /&gt;
#map    passwd uid              msSFUName&lt;br /&gt;
#map    passwd userPassword     msSFUPassword&lt;br /&gt;
#map    passwd homeDirectory    msSFUHomeDirectory&lt;br /&gt;
#map    passwd gecos            msSFUName&lt;br /&gt;
#filter shadow (objectClass=User)&lt;br /&gt;
#map    shadow uid              msSFUName&lt;br /&gt;
#map    shadow userPassword     msSFUPassword&lt;br /&gt;
#map    shadow shadowLastChange pwdLastSet&lt;br /&gt;
#filter group  (objectClass=Group)&lt;br /&gt;
#map    group  member           posixMember&lt;br /&gt;
&lt;br /&gt;
# Mappings for Active Directory&lt;br /&gt;
#pagesize 1000&lt;br /&gt;
#referrals off&lt;br /&gt;
#idle_timelimit 800&lt;br /&gt;
#filter passwd (&amp;amp;(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))&lt;br /&gt;
#map    passwd uid              sAMAccountName&lt;br /&gt;
#map    passwd homeDirectory    unixHomeDirectory&lt;br /&gt;
#map    passwd gecos            displayName&lt;br /&gt;
#filter shadow (&amp;amp;(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))&lt;br /&gt;
#map    shadow uid              sAMAccountName&lt;br /&gt;
#map    shadow shadowLastChange pwdLastSet&lt;br /&gt;
#filter group  (objectClass=group)&lt;br /&gt;
&lt;br /&gt;
# Alternative mappings for Active Directory&lt;br /&gt;
# (replace the SIDs in the objectSid mappings with the value for your domain)&lt;br /&gt;
#pagesize 1000&lt;br /&gt;
#referrals off&lt;br /&gt;
#idle_timelimit 800&lt;br /&gt;
#filter passwd (&amp;amp;(objectClass=user)(objectClass=person)(!(objectClass=computer)))&lt;br /&gt;
#map    passwd uid           cn&lt;br /&gt;
#map    passwd uidNumber     objectSid:S-1-5-21-3623811015-3361044348-30300820&lt;br /&gt;
#map    passwd gidNumber     objectSid:S-1-5-21-3623811015-3361044348-30300820&lt;br /&gt;
#map    passwd homeDirectory &amp;quot;/home/$cn&amp;quot;&lt;br /&gt;
#map    passwd gecos         displayName&lt;br /&gt;
#map    passwd loginShell    &amp;quot;/bin/bash&amp;quot;&lt;br /&gt;
#filter group (|(objectClass=group)(objectClass=person))&lt;br /&gt;
#map    group gidNumber      objectSid:S-1-5-21-3623811015-3361044348-30300820&lt;br /&gt;
&lt;br /&gt;
# Mappings for AIX SecureWay&lt;br /&gt;
#filter passwd (objectClass=aixAccount)&lt;br /&gt;
#map    passwd uid              userName&lt;br /&gt;
#map    passwd userPassword     passwordChar&lt;br /&gt;
#map    passwd uidNumber        uid&lt;br /&gt;
#map    passwd gidNumber        gid&lt;br /&gt;
#filter group  (objectClass=aixAccessGroup)&lt;br /&gt;
#map    group  cn               groupName&lt;br /&gt;
#map    group  gidNumber        gid&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== nsswitch.conf - Name Service Switch configuration file ===&lt;br /&gt;
 /etc/nsswitch.conf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
passwd:      files ldap&lt;br /&gt;
shadow:      files ldap&lt;br /&gt;
group:       files ldap&lt;br /&gt;
&lt;br /&gt;
# passwd:    db files nis&lt;br /&gt;
# shadow:    db files nis&lt;br /&gt;
# group:     db files nis&lt;br /&gt;
&lt;br /&gt;
hosts:       files dns&lt;br /&gt;
networks:    files dns&lt;br /&gt;
&lt;br /&gt;
services:    db files&lt;br /&gt;
protocols:   db files&lt;br /&gt;
rpc:         db files&lt;br /&gt;
ethers:      db files&lt;br /&gt;
netmasks:    files&lt;br /&gt;
netgroup:    files&lt;br /&gt;
bootparams:  files&lt;br /&gt;
&lt;br /&gt;
automount:   files&lt;br /&gt;
aliases:     files&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== system-auth ===&lt;br /&gt;
 vi /etc/pam.d/system-auth&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth            required        pam_env.so&lt;br /&gt;
auth            sufficient      pam_unix.so try_first_pass likeauth nullok&lt;br /&gt;
auth            sufficient      pam_ldap.so minimum_uid=1000 use_first_pass&lt;br /&gt;
auth            required        pam_deny.so&lt;br /&gt;
&lt;br /&gt;
account         required        pam_unix.so&lt;br /&gt;
account         sufficient      pam_ldap.so minimum_uid=1000 use_first_pass&lt;br /&gt;
&lt;br /&gt;
password        required        pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3&lt;br /&gt;
password        required        pam_unix.so try_first_pass use_authtok nullok sha512 shadow&lt;br /&gt;
password        sufficient      pam_ldap.so minimum_uid=1000 use_first_pass&lt;br /&gt;
password        required        pam_deny.so&lt;br /&gt;
&lt;br /&gt;
session         required        pam_limits.so&lt;br /&gt;
session         required        pam_env.so&lt;br /&gt;
session         required        pam_unix.so&lt;br /&gt;
session         sufficient      pam_ldap.so minimum_uid=1000 use_first_pass&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Test the Setup ===&lt;br /&gt;
 nslcd -d&lt;br /&gt;
&lt;br /&gt;
=== Update the Default Run Levels ===&lt;br /&gt;
 rc-update add nslcd default&lt;br /&gt;
 rc-update add nscd default&lt;br /&gt;
&lt;br /&gt;
=== Start the necessary Daemons ===&lt;br /&gt;
 /etc/init.d/nslcd start&lt;br /&gt;
 /etc/init.d/nscd start&lt;br /&gt;
&lt;br /&gt;
== Quota ==&lt;br /&gt;
=== 32-bit Project Identifier Support ===&lt;br /&gt;
We need to enable 32-bit project identifier support (PROJID32BIT feature) for our naming scheme (uid numbers larger than 65&#039;536), which is already the default on the stepping stone virtual machines:&lt;br /&gt;
 mkfs.xfs &#039;&#039;&#039;-i projid32bit=1&#039;&#039;&#039; /dev/vg-local-01/var&lt;br /&gt;
&lt;br /&gt;
=== Update /etc/fstab and Mount ===&lt;br /&gt;
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:&lt;br /&gt;
 LABEL=LV-VAR            /var            xfs             noatime,discard,inode64,uquota,pquota  0 2&lt;br /&gt;
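Before rebooting, you can sanity-check that the options field of the fstab entry really carries both quota options. This sketch parses the example line from above (the variable names are illustrative):&lt;br /&gt;

```shell
# Sketch: extract the fourth fstab field and confirm that both
# uquota and pquota appear in the comma-separated option list.
line="LABEL=LV-VAR /var xfs noatime,discard,inode64,uquota,pquota 0 2"
opts=$(echo "$line" | awk '{print $4}')
ok=yes
case ",$opts," in *,uquota,*) ;; *) ok=no ;; esac
case ",$opts," in *,pquota,*) ;; *) ok=no ;; esac
echo "$ok"   # yes when both options are present
```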
&lt;br /&gt;
 reboot&lt;br /&gt;
&lt;br /&gt;
Check if everything went OK:&lt;br /&gt;
 df -h | grep var&lt;br /&gt;
&lt;br /&gt;
 /dev/mapper/vg--local--01-var  1023G  220G  804G  22% /var&lt;br /&gt;
&lt;br /&gt;
=== Verify ===&lt;br /&gt;
Some important options for xfs_quota:&lt;br /&gt;
* -x: Enable expert mode.&lt;br /&gt;
* -c: Pass arguments on the command line. Multiple arguments may be given.&lt;br /&gt;
&lt;br /&gt;
Remount the file system /var and check if /var has the desired values:&lt;br /&gt;
 xfs_quota -x -c state /var&lt;br /&gt;
&lt;br /&gt;
As you can see (items marked bold), we have achieved our goal:&lt;br /&gt;
 User quota state on /var (/dev/mapper/vg--local--01-var)&lt;br /&gt;
   Accounting: &#039;&#039;&#039;ON&#039;&#039;&#039;&lt;br /&gt;
   Enforcement: &#039;&#039;&#039;ON&#039;&#039;&#039;&lt;br /&gt;
   Inode: #131 (1 blocks, 1 extents)&lt;br /&gt;
 Group quota state on /var (/dev/mapper/vg--local--01-var)&lt;br /&gt;
   Accounting: OFF&lt;br /&gt;
   Enforcement: OFF&lt;br /&gt;
   Inode: #132 (1 blocks, 1 extents)&lt;br /&gt;
 Project quota state on /var (/dev/mapper/vg--local--01-var)&lt;br /&gt;
   Accounting: &#039;&#039;&#039;ON&#039;&#039;&#039;&lt;br /&gt;
   Enforcement: &#039;&#039;&#039;ON&#039;&#039;&#039;&lt;br /&gt;
   Inode: #132 (1 blocks, 1 extents)&lt;br /&gt;
 Blocks grace time: [7 days 00:00:30]&lt;br /&gt;
 Inodes grace time: [7 days 00:00:30]&lt;br /&gt;
 Realtime Blocks grace time: [7 days 00:00:30]&lt;br /&gt;
&lt;br /&gt;
=== User Quotas ===&lt;br /&gt;
==== Adding a User Quota ====&lt;br /&gt;
Set a quota of 1 gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):&lt;br /&gt;
 xfs_quota -x -c &#039;limit bhard=1048576k 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Or in bytes:&lt;br /&gt;
 xfs_quota -x -c &#039;limit bhard=1073741824 4000187&#039; /var&lt;br /&gt;
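The two limits above describe the same size; the conversion can be double-checked with shell arithmetic:&lt;br /&gt;

```shell
# 1 GiB = 1024 MiB = 1048576 KiB; the k-suffixed limit and the
# plain byte value are therefore equivalent.
kib=1048576
bytes=$((kib * 1024))
echo "$bytes"   # 1073741824 bytes = 1 GiB
```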
&lt;br /&gt;
Read the quota information for the user 4000187:&lt;br /&gt;
 xfs_quota -x -c &#039;quota -v -N -u 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
 /dev/mapper/vg--local--01-var                     0          0    1048576   00 [--------] /var&lt;br /&gt;
&lt;br /&gt;
If the user already has data that counts against the quota, the result will change:&lt;br /&gt;
 /dev/mapper/vg--local--01-var                512000          0    1048576   00 [--------] /var&lt;br /&gt;
&lt;br /&gt;
==== Modifying a User Quota ====&lt;br /&gt;
To modify a user&#039;s quota, simply set a new quota (limit):&lt;br /&gt;
 xfs_quota -x -c &#039;limit bhard=1048576k 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Read the quota information for the user 4000187:&lt;br /&gt;
 xfs_quota -x -c &#039;quota -v -N -u 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
 /dev/mapper/vg--local--01-var                     0          0    1048576   00 [--------] /var&lt;br /&gt;
&lt;br /&gt;
If the user already has data that counts against the quota, the result will change:&lt;br /&gt;
 /dev/mapper/vg--local--01-var                512000          0    1048576   00 [--------] /var&lt;br /&gt;
&lt;br /&gt;
==== Removing a User Quota ====&lt;br /&gt;
Removing a quota for a user:&lt;br /&gt;
 xfs_quota -x -c &#039;limit bhard=0 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
The following command should give you an empty result:&lt;br /&gt;
 xfs_quota -x -c &#039;quota -v -N -u 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
=== Project (Directory) Quotas ===&lt;br /&gt;
==== Adding a Project (Directory) Quota ====&lt;br /&gt;
The XFS file system additionally allows you to set quotas on individual directory hierarchies in the file system that are known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name. We&#039;ll use the following values in the examples:&lt;br /&gt;
* project_ID: The uid of the online backup account (4000187).&lt;br /&gt;
* project_name: The uid of the online backup account (4000187). This could be a human readable name.&lt;br /&gt;
* mountpoint: The mountpoint of the xfs-filesystem (/var). See the &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt; entry from above.&lt;br /&gt;
* directory: The directory of the project (187/4000187), starting from the mountpoint of the xfs-filesystem (/var).&lt;br /&gt;
&lt;br /&gt;
Define a unique project ID for the directory hierarchy in the &amp;lt;code&amp;gt;/etc/projects&amp;lt;/code&amp;gt; file (project_ID:mountpoint/directory):&lt;br /&gt;
 echo &amp;quot;4000187:/var/backup/187/4000187/home/4000187&amp;quot; &amp;gt;&amp;gt; /etc/projects&lt;br /&gt;
&lt;br /&gt;
Create an entry in the &amp;lt;code&amp;gt;/etc/projid&amp;lt;/code&amp;gt; file that maps a project name to the project ID (project_name:project_ID):&lt;br /&gt;
 echo &amp;quot;4000187:4000187&amp;quot; &amp;gt;&amp;gt; /etc/projid&lt;br /&gt;
&lt;br /&gt;
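A quick way to double-check such an entry before appending it is to split it at the first colon with shell parameter expansion. This is a sketch using the values from this example:&lt;br /&gt;

```shell
# Sketch: split a /etc/projects entry (project_ID:directory) into its
# two fields; %%:* strips the longest :-suffix, #*: the shortest prefix.
entry="4000187:/var/backup/187/4000187/home/4000187"
proj_id=${entry%%:*}
proj_dir=${entry#*:}
echo "$proj_id"    # 4000187
echo "$proj_dir"   # /var/backup/187/4000187/home/4000187
```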
Set Project:&lt;br /&gt;
 xfs_quota -x -c &#039;project -s -p /var/backup/187/4000187/home/4000187 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Set Quota (limit) on Project:&lt;br /&gt;
 xfs_quota -x -c &#039;limit -p bhard=1048576k 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Check your quota (limit):&lt;br /&gt;
 xfs_quota -x -c &#039;quota -p 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Check the Quota:&lt;br /&gt;
* &amp;lt;code&amp;gt;-v&amp;lt;/code&amp;gt;: increase verbosity in reporting (also dumps zero values).&lt;br /&gt;
* &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt;: suppress the initial header.&lt;br /&gt;
* &amp;lt;code&amp;gt;-p&amp;lt;/code&amp;gt;: display project quota information.&lt;br /&gt;
* &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt;: human readable format.&lt;br /&gt;
 xfs_quota -x -c &#039;quota -v -N -p 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
 /dev/mapper/vg--local--01-var                     0          0    1048576   00 [--------] /var&lt;br /&gt;
&lt;br /&gt;
If you copied data into the project, the output will look something like:&lt;br /&gt;
 /dev/mapper/vg--local--01-var                512000          0    1048576   00 [--------] /var&lt;br /&gt;
&lt;br /&gt;
To give you an overall view of the whole system:&lt;br /&gt;
 xfs_quota -x -c report /var&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User quota on /var (/dev/mapper/vg--local--01-var)&lt;br /&gt;
                               Blocks                     &lt;br /&gt;
User ID          Used       Soft       Hard    Warn/Grace     &lt;br /&gt;
---------- -------------------------------------------------- &lt;br /&gt;
root          1024000          0          0     00 [--------]&lt;br /&gt;
4000187             0          0    1048576     00 [--------]&lt;br /&gt;
&lt;br /&gt;
Project quota on /var (/dev/mapper/vg--local--01-var)&lt;br /&gt;
                               Blocks                     &lt;br /&gt;
Project ID       Used       Soft       Hard    Warn/Grace     &lt;br /&gt;
---------- -------------------------------------------------- &lt;br /&gt;
4000187        512000          0    1048576     00 [--------]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Modifying a Project (Directory) Quota ====&lt;br /&gt;
To modify a project (directory) quota, simply set a new quota (limit) on the chosen project:&lt;br /&gt;
 xfs_quota -x -c &#039;limit -p bhard=1048576k 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Check your quota (limit):&lt;br /&gt;
 xfs_quota -x -c &#039;quota -p 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
==== Removing a Project (Directory) Quota ====&lt;br /&gt;
Removing a quota from a project:&lt;br /&gt;
 xfs_quota -x -c &#039;limit -p bhard=0 4000187&#039; /var&lt;br /&gt;
&lt;br /&gt;
Check the results:&lt;br /&gt;
 xfs_quota -x -c report /var&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User quota on /var (/dev/mapper/vg--local--01-var)&lt;br /&gt;
                               Blocks                     &lt;br /&gt;
User ID          Used       Soft       Hard    Warn/Grace     &lt;br /&gt;
---------- -------------------------------------------------- &lt;br /&gt;
root           512000          0          0     00 [--------]&lt;br /&gt;
4000187             0          0       1024     00 [--------]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, the line with the Project ID 4000187 has disappeared:&lt;br /&gt;
 4000187        512000          0    1048576     00 [--------]&lt;br /&gt;
&lt;br /&gt;
Don&#039;t forget to remove the project from &amp;lt;code&amp;gt;/etc/projects&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/etc/projid&amp;lt;/code&amp;gt;:&lt;br /&gt;
 sed -i -e &#039;/4000187/d&#039; /etc/projects&lt;br /&gt;
 sed -i -e &#039;/4000187/d&#039; /etc/projid&lt;br /&gt;
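Note that the unanchored pattern &amp;lt;code&amp;gt;/4000187/&amp;lt;/code&amp;gt; would also delete any other line that happens to contain those digits; anchoring the pattern to the start of the line is safer. A sketch against a temporary file:&lt;br /&gt;

```shell
# Sketch: with the anchored pattern ^4000187: only that project's
# entry is removed, while the neighbouring 4000186 entry survives.
f=$(mktemp)
printf "4000186:/var/backup/186/4000186\n" > "$f"
printf "4000187:/var/backup/187/4000187/home/4000187\n" >> "$f"
sed -i -e "/^4000187:/d" "$f"
cat "$f"   # only the 4000186 line remains
```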
&lt;br /&gt;
=== Some important notes concerning XFS ===&lt;br /&gt;
# The &#039;&#039;&#039;quotacheck&#039;&#039;&#039; command has no effect on XFS filesystems. The first time quota accounting is turned on (at mount time), XFS does an automatic quotacheck internally; afterwards, the quota system will always be completely consistent until quotas are manually turned off. &lt;br /&gt;
# There is &#039;&#039;&#039;no need for quota file(s)&#039;&#039;&#039; in the root of the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
== prov-backup-rsnapshot ==&lt;br /&gt;
Install the [[stoney_backup:_prov-backup-rsnapshot | prov-backup-rsnapshot ]] daemon script using the package manager:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
emerge -va sys-apps/sst-prov-backup-rsnapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
If this is the first provisioning module running on this server (very likely), you first have to configure the provisioning daemon. You can skip this step if another provisioning module is already running on this server.&lt;br /&gt;
&lt;br /&gt;
==== Provisioning global configuration ====&lt;br /&gt;
The global configuration for the provisioning daemon (which was installed with the first provisioning module and the &amp;lt;code&amp;gt;sys-apps/sst-provisioning&amp;lt;/code&amp;gt; package) applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules. &lt;br /&gt;
 /etc/Provisioning/Global.conf&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2012 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 0&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# The number of seconds to wait before retrying to contact the backend server during startup.&lt;br /&gt;
SLEEP = 10&lt;br /&gt;
&lt;br /&gt;
# Number of backend server connection retries during startup.&lt;br /&gt;
ATTEMPTS = 3&lt;br /&gt;
&lt;br /&gt;
[Operation Mode]&lt;br /&gt;
# The number of seconds to wait before retrying to contact the backend server in case of a service interruption.&lt;br /&gt;
SLEEP = 30&lt;br /&gt;
&lt;br /&gt;
# Number of backend server connection retries in case of a service interruption.&lt;br /&gt;
ATTEMPTS = 3&lt;br /&gt;
&lt;br /&gt;
[Mail]&lt;br /&gt;
# Error messages are sent to the mail address configured below.&lt;br /&gt;
SENDTO = &amp;lt;YOUR-MAIL-ADDRESS&amp;gt;&lt;br /&gt;
HOST = mail.stepping-stone.ch&lt;br /&gt;
PORT = 587&lt;br /&gt;
USERNAME = &amp;lt;YOUR-NOTIFICATION-EMAIL-ADDRESS&amp;gt;&lt;br /&gt;
PASSWORD = &amp;lt;PASSWORD&amp;gt;&lt;br /&gt;
FROMNAME = Provisioning daemon&lt;br /&gt;
CA_DIR = /etc/ssl/certs&lt;br /&gt;
SSL = starttls&lt;br /&gt;
AUTH_METHOD = LOGIN&lt;br /&gt;
&lt;br /&gt;
# Additionally, you can be informed about creation, modification and deletion of services.&lt;br /&gt;
WANTINFOMAIL = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Provisioning daemon prov-backup-rsnapshot module ====&lt;br /&gt;
The module specific configuration is located in /etc/Provisioning/&amp;lt;Service&amp;gt;/&amp;lt;Type&amp;gt;.conf. In the case of the prov-backup-rsnapshot module this is &amp;lt;code&amp;gt;/etc/Provisioning/Backup/Rsnapshot.conf&amp;lt;/code&amp;gt;. (Note: Comments starting with /* are not part of the configuration file; they appear only in the wiki to provide additional information.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
/* If you want, you can override the logging settings from the global configuration file; this might be useful for debugging */&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
/* Specify the host&#039;s fully qualified domain name. This name will be used to perform some checks and will also appear in the information and error mails */&lt;br /&gt;
ENVIRONMENT = &amp;lt;FQDN&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = ldaps://ldapm.tombstone.org&lt;br /&gt;
PORT = 636&lt;br /&gt;
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org&lt;br /&gt;
COOKIE_FILE = /etc/Provisioning/Backup/rsnapshot.cookie&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(sstProvisioningState=0))&lt;br /&gt;
&lt;br /&gt;
/* Specifies the service itself. As it is the prov-backup-rsnapshot module, the SERVICE is &amp;quot;Backup&amp;quot; and the TYPE is &amp;quot;Rsnapshot&amp;quot;.&lt;br /&gt;
 * The MODUS is as usual selfcare and the TRANSPORTAPI is LocalCLI, because the daemon runs on the same host on which the&lt;br /&gt;
 * backup accounts are provisioned, so the commands can be executed on this host using the CLI.&lt;br /&gt;
 * For more information about MODUS and TRANSPORTAPI see https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration&lt;br /&gt;
 */&lt;br /&gt;
[Service]&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
TYPE = Rsnapshot&lt;br /&gt;
&lt;br /&gt;
SYSLOG = prov-backup-rsnapshot&lt;br /&gt;
&lt;br /&gt;
/* For the TRANSPORTAPI LocalCLI no gateway is required because there is no connection to establish. So set HOST, USER and&lt;br /&gt;
 * DSA_FILE to whatever you want, but don&#039;t leave them blank, otherwise the provisioning daemon will log error messages&lt;br /&gt;
 * saying that these attributes are empty.&lt;br /&gt;
 */&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
/* Information about the backup itself (how to set everything up). Note that the %uid% in the RSNAPSHOT_CONFIG_FILE parameter will&lt;br /&gt;
 * be replaced by the account&#039;s UID. The script CREATE_CHROOT_CMD was installed with the prov-backup-rsnapshot module, so do not&lt;br /&gt;
 * change this parameter. The quota parameters (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE and PROJID_FILE) represent &lt;br /&gt;
 * the quota setup as described on http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed this&lt;br /&gt;
 * manual, you can copy-paste them into your configuration file, otherwise adapt them according to your quota setup.&lt;br /&gt;
 */&lt;br /&gt;
[Backup]&lt;br /&gt;
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%&lt;br /&gt;
SET_QUOTA_CMD = /usr/sbin/xfs_quota&lt;br /&gt;
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh&lt;br /&gt;
MOUNTPOINT = /var&lt;br /&gt;
QUOTA_FILE = /etc/backupSize&lt;br /&gt;
PROJECTS_FILE = /etc/projects&lt;br /&gt;
PROJID_FILE = /etc/projid&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
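To illustrate the &amp;lt;code&amp;gt;%uid%&amp;lt;/code&amp;gt; replacement in the RSNAPSHOT_CONFIG_FILE parameter described above, the following minimal shell sketch performs the same substitution (the UID value is hypothetical):&lt;br /&gt;

```shell
# Expand the %uid% placeholder the way the prov-backup-rsnapshot module
# does for RSNAPSHOT_CONFIG_FILE (the UID value below is hypothetical).
uid="4000000001"
template="/etc/rsnapshot/rsnapshot.conf.%uid%"
config_file="${template//%uid%/$uid}"
echo "$config_file"   # prints /etc/rsnapshot/rsnapshot.conf.4000000001
```

This suggests that the daemon maintains one rsnapshot configuration file per backup account.&lt;br /&gt;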
&lt;br /&gt;
== backup utils ==&lt;br /&gt;
Install the backup utils (multiple scripts which help you to manage and monitor your backup server and backup accounts) using the package manager. For more information about the scripts please see the [[stoney_backup:_Service_Software | stoney backup Service Software]] page. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
emerge -va sys-apps/sst-backup-utils&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
Please refer to the configuration sections for the different scripts in [[stoney_backup:_Service_Software | stoney backup Service Software]].&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://www.openldap.org/ OpenLDAP], an open source implementation of the Lightweight Directory Access Protocol.&lt;br /&gt;
* [http://arthurdejong.org/nss-pam-ldapd/ nss-pam-ldapd], a Name Service Switch (NSS) module that allows your LDAP server to provide user account, group, host name, alias, netgroup, and basically any other information that you would normally get from /etc flat files or NIS.&lt;br /&gt;
* [http://www.gentoo.org/doc/de/ldap-howto.xml Gentoo guide to OpenLDAP authentication] (in German).&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Centralized_authentication_using_OpenLDAP Centralized authentication using OpenLDAP].&lt;br /&gt;
* [https://code.google.com/p/openssh-lpk/source/browse/trunk/schemas/openssh-lpk_openldap.schema openssh-lpk_openldap.schema] OpenSSH LDAP Public Keys.&lt;br /&gt;
* [http://sourceforge.net/projects/linuxquota/ linuxquota] Linux DiskQuota.&lt;br /&gt;
* [http://www.rsnapshot.org/ rsnapshot], a remote filesystem snapshot utility, based on rsync.&lt;br /&gt;
* [http://olivier.sessink.nl/jailkit/ Jailkit], set of utilities to limit user accounts to specific files using chroot() and or specific commands. Also includes a tool to build a chroot environment.&lt;br /&gt;
* [http://www.busybox.net/ Busybox] BusyBox combines tiny versions of many common UNIX utilities into a single small executable. Useful to reduce the number of files (and thus the complexity) when building a chroot. &lt;br /&gt;
&lt;br /&gt;
[[Category:stoney backup]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3770</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3770"/>
		<updated>2014-06-26T14:57:22Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Restore */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file; the underlying disk-image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (create-, export- and commitSnapshot) one after the other. So the control instance only needs some very basic logic: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a little bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
# Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # If the snapshot was successful, put the machine into the &lt;br /&gt;
    # successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# export and commit all successful_snapshot machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # Check if the element at this position is not null, then the snapshot &lt;br /&gt;
    # for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
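Each backup daemon can implement this communication by polling the backend for its machines and dispatching on the current &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039;. A minimal, hypothetical sketch of such a dispatch (&amp;lt;code&amp;gt;fetch_mode&amp;lt;/code&amp;gt; is a stand-in for the actual LDAP lookup; the mode names follow the steps described below):&lt;br /&gt;

```shell
# Stand-in for the LDAP query that reads sstProvisioningMode
# (hypothetical helper; a real daemon would query the backend).
fetch_mode() { echo "${MODE:-finished}"; }

# Dispatch on the mode set by the control instance.
poll_once() {
    case "$(fetch_mode)" in
        snapshot) echo "start snapshot process" ;;
        export)   echo "start export process" ;;
        commit)   echo "start commit process" ;;
        *)        echo "nothing to do" ;;
    esac
}

MODE=snapshot poll_once   # prints "start snapshot process"
```

The real daemons additionally write the intermediate (snapshotting, exporting, committing) and final modes back to the backend, as shown in the LDIF examples below.&lt;br /&gt;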
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039; ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]). It should be written at the time when the backup is planned and due to be executed. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time must be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
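Such a UTC timestamp in the [YYYY][MM][DD]T[hh][mm][ss]Z format can be generated directly with &amp;lt;code&amp;gt;date&amp;lt;/code&amp;gt;, for example:&lt;br /&gt;

```shell
# Name for the backup sub tree: the current time in UTC,
# formatted as [YYYY][MM][DD]T[hh][mm][ss]Z (ISO 8601 basic format).
ou="$(date -u +%Y%m%dT%H%M%SZ)"
echo "$ou"
```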
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk-images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk-images back to the underlying ones is done.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished and therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished and therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need a workaround for backing up the machines:&lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation of the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts with a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the missing logic to BackupKVMWrapper.pl: the LDAPKVMWrapper generates the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (cf. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup; if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful; if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machine list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
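The backup-subtree update in step 2 can be sketched as two LDIF operations. This is a hypothetical sketch: the timestamps are example values, and the object classes of the new leaf are assumed to follow the exclusion example below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical sketch: remove yesterday&#039;s backup leaf ...&lt;br /&gt;
dn: ou=20121001T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: delete&lt;br /&gt;
&lt;br /&gt;
# ... and add today&#039;s leaf.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: add&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;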
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrappers interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, simply add the following entry to your LDAP directory:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine in the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
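If the LDAP server allows it, the subtree deletion can also be done from the command line; a minimal sketch, assuming the OpenLDAP client tools and the manager credentials used in the examples above (the &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt; flag deletes the entry and its subtree recursively):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;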
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
&#039;&#039;&#039;Attention:&#039;&#039;&#039; The restore process is neither defined nor implemented yet. The following documentation describes the old restore process.&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone!&lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (the protocol is file://).&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VMs backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VMs XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime, the vm-manager merges the LDIF we unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all necessary files in the configured retain location, the restore process can be started. We simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
&#039;&#039;&#039;Attention&#039;&#039;&#039;: The restore process is neither defined nor implemented yet. The following documentation describes the old restore process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#Current_Implementation_.28Backup.29]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that there is no other way to restore the machine. It might be easier and safer to retrieve lost files from the online backup, if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy-paste the commands in the rest of this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double-check all variables you are setting here. If one of them is wrong, you may end up restoring a running machine or overwriting a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now you should save the backup date and the disk image name(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
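Instead of typing the backup date by hand, it can also be derived from the state file name; a sketch using shell parameter expansion, assuming exactly one state file with the naming scheme shown above:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
statefile=$(ls ${machinename}.state.*) # e.g. b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z&lt;br /&gt;
backupdate=&amp;quot;${statefile##*.state.}&amp;quot; # strips everything up to and including &amp;quot;.state.&amp;quot;, leaving e.g. 20140109T134445Z&lt;br /&gt;
echo &amp;quot;${backupdate}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;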
&lt;br /&gt;
Have another look at the different variables and &#039;&#039;&#039;double-check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the LDIF file to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) to the LDAP directory, after first performing some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restore the domain from the state file in the backup location, using the XML from the retain location (the one you might have edited)&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it left off when the backup was taken.&lt;br /&gt;
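To confirm that the restored machine is really running, you can check its state before cleaning up. A minimal sketch; the machine name and the listing below are assumed example values, and on a real node you would parse the output of `virsh list --all` instead of the sample text:

```shell
# Check that the restored domain shows up as "running".
# ${machinename} is the same placeholder used in the commands above;
# the sample listing is assumed output for illustration only.
machinename="web-01"
sample_output=' Id   Name     State
 1    web-01   running'

state=$(printf '%s\n' "$sample_output" | awk -v vm="$machinename" '$2 == vm {print $3}')
echo "$state"
```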
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3769</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3769"/>
		<updated>2014-06-26T14:55:54Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Backend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (create-, export- and commitSnapshot) are shown in detail below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the commitSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
&#039;&#039;&#039; Restore is currently not implemented &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs all debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new on restart)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
&#039;&#039;&#039;Currently the backend configuration is not active. Once activated, it will look as follows (possibly with minor modifications):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the backend, you need at least one configuration that applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration that applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
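The precedence rules above amount to a first-match lookup: the first non-empty configuration source wins. A minimal sketch, assuming the three configuration values have already been read from the backend (the function name is illustrative, not part of the daemon):

```shell
# First non-empty value wins: VM-specific, then VM-Pool, then cloud-wide.
# resolve_config is a hypothetical helper, not taken from prov-backup-kvm.
resolve_config() {
    vm_value=$1; pool_value=$2; cloud_value=$3
    for value in "$vm_value" "$pool_value" "$cloud_value"; do
        if [ -n "$value" ]; then
            printf '%s\n' "$value"
            return 0
        fi
    done
    return 1   # no configuration found at any level
}

# Example: the VM has no own setting, so the VM-Pool value applies.
resolve_config "" "2" "1"
```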
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, it is independent of the VM-Nodes. Therefore, no guarantee can be given that this RAM-Disk exists on all VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039;: The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
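The permission defaults listed above can be applied with standard tools. A small sketch under stated assumptions: the paths are examples only (the daemon derives the real ones from the backend), and the chown to root:vm-storage is omitted here because it requires root privileges:

```shell
# Apply the default directory and image permissions from the list above.
# Paths are illustrative placeholders, not fixed by the daemon.
image_dir=$(mktemp -d)
image="${image_dir}/disk.qcow2"
touch "$image"

chmod 770 "$image_dir"   # sstVirtualizationDiskImageDirectoryPermission default
chmod 660 "$image"       # sstVirtualizationDiskImagePermission default

stat -c '%a' "$image_dir" "$image"
```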
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all virtual machine entries that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all virtual machine entries that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant &lt;br /&gt;
{&lt;br /&gt;
    SUCCESS_CODE =&amp;gt; 0,&lt;br /&gt;
    ERROR_CODE =&amp;gt; 1,&lt;br /&gt;
    &lt;br /&gt;
    TRUE =&amp;gt; 1,&lt;br /&gt;
    FALSE =&amp;gt; 0,&lt;br /&gt;
    &lt;br /&gt;
    # Specific error codes&lt;br /&gt;
    TEMPLATE_NOT_READABLE =&amp;gt; 101,&lt;br /&gt;
    NO_MACHINE =&amp;gt; 102,&lt;br /&gt;
    NO_XML_DESCRIPTION =&amp;gt; 103,&lt;br /&gt;
    CANNOT_CREATE_SNAPSHOT =&amp;gt; 104,&lt;br /&gt;
    NO_STATE_INFORMATION =&amp;gt; 105,&lt;br /&gt;
    CANNOT_START_MACHINE =&amp;gt; 106,&lt;br /&gt;
    NO_BACKUP_LOCATION =&amp;gt; 107,&lt;br /&gt;
    UNCONSISTENT_BACKUP =&amp;gt; 108,&lt;br /&gt;
    CANNOT_GET_SNAPSHOT =&amp;gt; 109,&lt;br /&gt;
    CANNOT_DELETE_SNAPSHOT =&amp;gt; 110,&lt;br /&gt;
    CANNOT_UPDATE_XML =&amp;gt; 111,&lt;br /&gt;
    NO_SPACE_LEFT =&amp;gt; 112,&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
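A calling script can react to these exit codes. A brief sketch; `backup_kvm` is a hypothetical stand-in for however the KVM-Backup script is invoked, stubbed here to simulate one failure case:

```shell
NO_SPACE_LEFT=112   # taken from the constant list above

# Stub that simulates a backup run failing with NO_SPACE_LEFT.
# In reality this would invoke the KVM-Backup script.
backup_kvm() { return "$NO_SPACE_LEFT"; }

rc=0
backup_kvm || rc=$?

if [ "$rc" -eq "$NO_SPACE_LEFT" ]; then
    echo "backup failed: no space left on device (exit code $rc)"
fi
```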
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Implement restore&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stoney-cloud/stoney-conductor/tree/master/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3768</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3768"/>
		<updated>2014-06-26T14:54:39Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Exit codes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (create-, export- and commitSnapshot) are shown in detail below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the commitSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
&#039;&#039;&#039; Restore is currently not implemented &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs all debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new on restart)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need at least one configuration that applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration that applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, it is independent of the VM-Nodes. Therefore, no guarantee can be given that this RAM-Disk exists on all VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039;: The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all virtual machine entries that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all virtual machine entries that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant &lt;br /&gt;
{&lt;br /&gt;
    SUCCESS_CODE =&amp;gt; 0,&lt;br /&gt;
    ERROR_CODE =&amp;gt; 1,&lt;br /&gt;
    &lt;br /&gt;
    TRUE =&amp;gt; 1,&lt;br /&gt;
    FALSE =&amp;gt; 0,&lt;br /&gt;
    &lt;br /&gt;
    # Specific error codes&lt;br /&gt;
    TEMPLATE_NOT_READABLE =&amp;gt; 101,&lt;br /&gt;
    NO_MACHINE =&amp;gt; 102,&lt;br /&gt;
    NO_XML_DESCRIPTION =&amp;gt; 103,&lt;br /&gt;
    CANNOT_CREATE_SNAPSHOT =&amp;gt; 104,&lt;br /&gt;
    NO_STATE_INFORMATION =&amp;gt; 105,&lt;br /&gt;
    CANNOT_START_MACHINE =&amp;gt; 106,&lt;br /&gt;
    NO_BACKUP_LOCATION =&amp;gt; 107,&lt;br /&gt;
    UNCONSISTENT_BACKUP =&amp;gt; 108,&lt;br /&gt;
    CANNOT_GET_SNAPSHOT =&amp;gt; 109,&lt;br /&gt;
    CANNOT_DELETE_SNAPSHOT =&amp;gt; 110,&lt;br /&gt;
    CANNOT_UPDATE_XML =&amp;gt; 111,&lt;br /&gt;
    NO_SPACE_LEFT =&amp;gt; 112,&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Implement restore&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stoney-cloud/stoney-conductor/tree/master/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3767</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3767"/>
		<updated>2014-06-26T14:53:47Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Source Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (create-, export- and commitSnapshot) are shown in detail below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the commitSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
&#039;&#039;&#039;Restore is currently not implemented.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug messages to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs informational messages to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new when the daemon starts)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie contains an empty CSN, so that all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
The backend must contain at least one configuration which applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
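The lookup order above amounts to a layered merge of settings. The following is an illustrative Python sketch (not the daemon&#039;s actual Perl code; the function name is made up, the attribute names come from the configuration parameters on this page):

```python
# Illustrative sketch of the configuration lookup order described above:
# VM-specific settings win over VM-Pool settings, which in turn win over
# the stoney-cloud-wide configuration.
def resolve_backup_config(cloud_wide, pool=None, vm=None):
    """Merge the three configuration layers; the most specific layer wins."""
    merged = dict(cloud_wide)
    merged.update(pool or {})  # VM-Pool-specific overrides
    merged.update(vm or {})    # VM-specific overrides win
    return merged

cloud_wide = {"sstBackupNumberOfIterations": 1,
              "sstVirtualizationDiskImageFormat": "qcow2"}
pool = {"sstBackupNumberOfIterations": 3}
vm = {}  # no VM-specific configuration: the VM-Pool value applies

config = resolve_backup_config(cloud_wide, pool, vm)
print(config["sstBackupNumberOfIterations"])       # 3
print(config["sstVirtualizationDiskImageFormat"])  # qcow2
```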
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations to keep. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or for a specific virtual machine template, it is independent of the VM-Nodes. Therefore, there is no guarantee that this RAM-Disk exists on all VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force-start the VM if its state cannot be restored during the backup process. TRUE or FALSE, default is FALSE. Attention: if set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039;: The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
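The octal permission attributes above map directly onto a chmod-style mode. A hedged sketch (illustrative Python, not part of the daemon; the helper name is made up) of applying sstVirtualizationDiskImagePermission to a freshly created disk image:

```python
import os
import stat
import tempfile

# Illustrative helper (not part of the daemon): apply an octal permission
# string such as the default "660" (i.e. 0660, rw-rw----) to a disk image.
# Changing owner/group (root / vm-storage) would additionally need
# privileged chown calls and is omitted here.
def apply_disk_image_permission(path, perm_str="660"):
    mode = int(perm_str, 8)  # "660" -> 0o660
    os.chmod(path, mode)
    return mode

# Demonstrate on a temporary stand-in for a disk image.
with tempfile.NamedTemporaryFile(suffix=".qcow2") as image:
    mode = apply_disk_image_permission(image.name, "660")
    actual = stat.S_IMODE(os.stat(image.name).st_mode)
    print(oct(mode), oct(actual))  # 0o660 0o660
```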
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Boolean value which indicates whether to exclude a virtual machine from the default backup plan. Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
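A wrapper script invoking the backup process can react to these codes. A hypothetical example (the mapping reproduces only a few of the constants above and is not an official API):

```python
# Hypothetical helper showing how a caller might translate a few of the
# KVM-Backup exit codes listed above into readable messages.
KVM_BACKUP_EXIT_CODES = {
    0: "SUCCESS_CODE",
    1: "UNDEFINED_ERROR",
    4: "NOT_ENOUGH_SPACE_ON_RAM_DISK",
    26: "BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST",
    47: "NOT_ENOUGH_DISK_SPACE",
}

def describe_exit_code(code):
    return KVM_BACKUP_EXIT_CODES.get(code, "unknown exit code %d" % code)

print(describe_exit_code(0))   # SUCCESS_CODE
print(describe_exit_code(47))  # NOT_ENOUGH_DISK_SPACE
```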
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Implement restore&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stoney-cloud/stoney-conductor/tree/master/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3766</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3766"/>
		<updated>2014-06-26T14:53:00Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Next steps */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (createSnapshot, exportSnapshot and commitSnapshot) are shown in detail below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the commitSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#commitSnapshot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
&#039;&#039;&#039;Restore is currently not implemented.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug messages to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs informational messages to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new when the daemon starts)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie contains an empty CSN, so that all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
The backend must contain at least one configuration which applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations to keep. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or for a specific virtual machine template, it is independent of the VM-Nodes. Therefore, there is no guarantee that this RAM-Disk exists on all VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force-start the VM if its state cannot be restored during the backup process. TRUE or FALSE, default is FALSE. Attention: if set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039;: The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Boolean value which indicates whether to exclude a virtual machine from the default backup plan. Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Implement restore&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3765</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3765"/>
		<updated>2014-06-26T14:52:28Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Restore */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow of the Provisioning-Backup-KVM Daemon. The subroutines (createSnapshot, exportSnapshot and commitSnapshot) are shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the commitSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
&#039;&#039;&#039; Restore is currently not implemented &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new&lt;br /&gt;
# when the daemon is started)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie contains an empty CSN, so that all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need at least one configuration which applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
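The evaluation order above can be sketched as follows (a minimal Python sketch of the fallback logic; the function and argument names are illustrative, not the daemon's actual API):

```python
def resolve_backup_config(vm_config, pool_config, cloud_config):
    """Return the configuration entry that applies to a VM.

    Fallback order as described above: a VM-specific entry wins,
    then the VM-Pool entry, then the stoney-cloud-wide default.
    """
    if vm_config is not None:       # 1. VM-specific configuration
        return vm_config
    if pool_config is not None:     # 2. VM-Pool-specific configuration
        return pool_config
    return cloud_config             # 3. stoney-cloud-wide configuration


# Example: the VM has no entry of its own, but its VM-Pool does.
print(resolve_backup_config(None, "pool-config-dn", "default-config-dn"))
# prints "pool-config-dn"
```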
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, it is independent of the VM-Nodes. Therefore, no guarantee can be given that this RAM-Disk exists on all VM-Nodes. A check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
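The sequences above can be resolved by sorting the attribute values numerically on their index prefix. A minimal Python sketch of that ordering logic (the helper name is illustrative, not part of the daemon):

```python
def ordered_uuids(values):
    """Sort multi-valued entries of the form "0: UUID3" numerically
    by their index prefix and return the UUIDs in sequence order."""
    order = {}
    for value in values:
        index, _, uuid = value.partition(":")
        order[int(index)] = uuid.strip()
    return [order[i] for i in sorted(order)]


# The start sequence from the example above:
print(ordered_uuids(["1: UUID2", "0: UUID3", "2: UUID1"]))
# prints ['UUID3', 'UUID2', 'UUID1']
```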
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
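A caller that invokes the backup script can map these numeric codes back to their constant names for logging. A minimal Python sketch (the lookup helper is hypothetical; the names and numbers are an excerpt copied from the table above):

```python
# Excerpt of the exit-code table above, as a reverse lookup for logging.
CODE_NAMES = {
    0: "SUCCESS_CODE",
    1: "UNDEFINED_ERROR",
    2: "MISSING_PARAMETER_IN_CONFIG_FILE",
    47: "NOT_ENOUGH_DISK_SPACE",
    48: "NO_DISK_SPACE_INFORMATION",
}

def describe_exit_code(code):
    """Map a numeric exit code of the backup script to its constant name."""
    return CODE_NAMES.get(code, "unknown exit code %d" % code)


print(describe_exit_code(47))   # prints "NOT_ENOUGH_DISK_SPACE"
```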
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This way the backup (merge) time can be reduced considerably.&lt;br /&gt;
*** Needs different behaviour for save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3764</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3764"/>
		<updated>2014-06-26T14:52:00Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* commitSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow of the Provisioning-Backup-KVM Daemon. The subroutines (createSnapshot, exportSnapshot and commitSnapshot) are shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the commitSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 5: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new&lt;br /&gt;
# when the daemon is started)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie contains an empty CSN, so that all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need at least one configuration which applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, it is independent of the VM-Nodes. Therefore, no guarantee can be given that this RAM-Disk exists on all VM-Nodes. A check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
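For monitoring or wrapper scripts it can be handy to map the numeric exit codes above back to their symbolic names. A small illustrative sketch (only a subset of the codes is shown; this helper is not part of the daemon):&lt;br /&gt;

```python
# A subset of the prov-backup-kvm exit codes listed above, mapped for a
# hypothetical monitoring hook.
EXIT_CODES = {
    0: "SUCCESS_CODE",
    1: "UNDEFINED_ERROR",
    4: "NOT_ENOUGH_SPACE_ON_RAM_DISK",
    25: "RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST",
    47: "NOT_ENOUGH_DISK_SPACE",
}

def describe(code):
    """Return the symbolic name for a known exit code."""
    return EXIT_CODES.get(code, f"unknown exit code {code}")

print(describe(4))   # NOT_ENOUGH_SPACE_ON_RAM_DISK
print(describe(99))  # unknown exit code 99
```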
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but instead merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This allows the backup (merge) time to be reduced considerably.&lt;br /&gt;
*** Requires a different sequence of operations: save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
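The proposed sequence can be outlined as a list of the underlying commands. This is an illustrative sketch only: the VM name, image paths and the exact virsh/qemu-img invocations are assumptions, and the real daemon drives these steps through its own Perl modules.&lt;br /&gt;

```python
# Sketch of the proposed commit-based backup sequence (hypothetical paths).
disk = "/var/virtualization/vm-disk.qcow2"   # live image (hypothetical)
overlay = disk + ".overlay"                  # temporary backing-store overlay
retain = "/var/virtualization/retain"        # sstBackupRetainDirectory default
backup = "/var/backup/virtualization"        # sstBackupRootDirectory default

steps = [
    f"virsh save vm-001 {retain}/vm-001.state",                # save: suspend VM, dump RAM state
    f"cp -p {disk} {backup}/",                                 # copy/move: export quiescent image
    f"qemu-img create -f qcow2 -b {disk} -F qcow2 {overlay}",  # create new image: writes go to overlay
    f"virsh restore {retain}/vm-001.state",                    # restore: resume VM (domain XML pointed at the overlay)
    f"qemu-img commit {overlay}",                              # merge: commit overlay back into the original
]
for step in steps:
    print(step)
```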
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-Retain.png&amp;diff=3763</id>
		<title>File:KVM-Backup-Workflow-Retain.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-Retain.png&amp;diff=3763"/>
		<updated>2014-06-26T14:51:35Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:KVM-Backup-Workflow-Retain.png&amp;amp;quot;: New commit method for committing changes from overlay back to underlay image&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Detailed workflow for the retain process&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3762</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3762"/>
		<updated>2014-06-26T14:50:52Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* createSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow of the Provisioning-Backup-KVM Daemon. The subroutines (create-, export- and commitSnapshot) are shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the createSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 2: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 3: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 4: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file will be used to be able to restart the daemon without&lt;br /&gt;
# processing every entry again (they appear as new if the daemon is started) &lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend you need at least one configuration that applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can overwrite the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration that applies to a VM is evaluated in the following way:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
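The lookup order above amounts to a simple fallback chain: VM-specific beats pool-specific beats stoney-cloud-wide. A minimal sketch (function and variable names are illustrative, not the daemon's API):&lt;br /&gt;

```python
# Sketch of the configuration resolution described above. Each argument is
# the parsed configuration entry for that level, or None if it does not exist.
def effective_config(vm_config, pool_config, cloud_config):
    if vm_config is not None:       # 1. VM-specific configuration wins
        return vm_config
    if pool_config is not None:     # 2. otherwise the VM-Pool configuration
        return pool_config
    return cloud_config             # 3. otherwise the stoney-cloud-wide one

cloud = {"sstBackupNumberOfIterations": 1}
pool = {"sstBackupNumberOfIterations": 3}
print(effective_config(None, pool, cloud))  # pool config wins over cloud-wide
```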
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or for a specific virtual machine template, it is independent of the VM-Nodes. Therefore, there is no guarantee that this RAM-Disk exists on all VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start the VM if its state cannot be restored during the backup process. TRUE or FALSE, default is FALSE. Attention: if set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039;: The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
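Taken together, a stoney-cloud-wide backup configuration entry could look like the following hypothetical LDIF fragment (the DN and all values are illustrative defaults, not a prescribed layout):&lt;br /&gt;

```ldif
dn: ou=backup,ou=configuration,ou=virtualization,dc=example,dc=org
sstBackupNumberOfIterations: 1
sstBackupRootDirectory: file:///var/backup/virtualization
sstBackupRetainDirectory: file:///var/virtualization/retain
sstRestoreVMWithoutState: FALSE
sstBackupRamDiskLocation: /mnt/ramdisk
sstVirtualizationVirtualMachineForceStart: FALSE
sstVirtualizationBandwidthMerge: 0
sstVirtualizationDiskImageFormat: qcow2
sstVirtualizationDiskImageOwner: root
sstVirtualizationDiskImageGroup: vm-storage
sstVirtualizationDiskImagePermission: 660
sstVirtualizationDiskImageDirectoryOwner: root
sstVirtualizationDiskImageDirectoryGroup: vm-storage
sstVirtualizationDiskImageDirectoryPermission: 770
```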
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries, that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the uuid of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries, that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the uuid of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but instead merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This allows the backup (merge) time to be reduced considerably.&lt;br /&gt;
*** Requires a different sequence of operations: save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3761</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3761"/>
		<updated>2014-06-26T14:50:38Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* exportSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow of the Provisioning-Backup-KVM Daemon. The subroutines (create-, export- and commitSnapshot) are shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the snapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 2: Detailed workflow for the exportSnapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 3: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#commitSnaphsot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 4: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file will be used to be able to restart the daemon without&lt;br /&gt;
# processing every entry again (they appear as new if the daemon is started) &lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend you need at least one configuration that applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can overwrite the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration that applies to a VM is evaluated in the following way:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or for a specific virtual machine template, it is independent of the VM-Nodes. Therefore there is no guarantee that this RAM-Disk exists on all the VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Boolean value indicating whether a virtual machine is excluded from the default backup plan. Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script; see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This reduces the backup (merge) time considerably.&lt;br /&gt;
*** Needs different behaviour for save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
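&lt;br /&gt;
With libvirt, this commit-back step could roughly look as follows (a sketch only; it assumes virsh is available on the VM-Node, that vda is the disk target and that the snapshot overlay is the active image):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Commit the changes written to the overlay back into the original (backing) image&lt;br /&gt;
virsh blockcommit &amp;lt;domain&amp;gt; vda --active --pivot&lt;br /&gt;
&lt;br /&gt;
# The overlay file is no longer in use afterwards and can be removed&lt;br /&gt;
rm &amp;lt;overlay-image&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;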
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-Merge.png&amp;diff=3760</id>
		<title>File:KVM-Backup-Workflow-Merge.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-Merge.png&amp;diff=3760"/>
		<updated>2014-06-26T14:50:09Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:KVM-Backup-Workflow-Merge.png&amp;amp;quot;: New export method with only copy the live image away&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Detailed workflow for the merge process&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-detailed.xmi&amp;diff=3759</id>
		<title>File:KVM-Backup-Workflow-detailed.xmi</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-detailed.xmi&amp;diff=3759"/>
		<updated>2014-06-26T14:49:05Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:KVM-Backup-Workflow-detailed.xmi&amp;amp;quot;: The new backup process with disk only snapshot and committing changes back&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Detailed workflow for the three subprocesses (snapshot, merge and retain)&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-Snapshot.png&amp;diff=3758</id>
		<title>File:KVM-Backup-Workflow-Snapshot.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow-Snapshot.png&amp;diff=3758"/>
		<updated>2014-06-26T14:48:22Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:KVM-Backup-Workflow-Snapshot.png&amp;amp;quot;: New createSnapshot method with disk only snapshots&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Detailed snapshot workflow&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-simple.xmi&amp;diff=3757</id>
		<title>File:KVM-Backup-simple.xmi</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-simple.xmi&amp;diff=3757"/>
		<updated>2014-06-26T14:01:37Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:KVM-Backup-simple.xmi&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simplified prov-backup-kvm workflow&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow.png&amp;diff=3756</id>
		<title>File:KVM-Backup-Workflow.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:KVM-Backup-Workflow.png&amp;diff=3756"/>
		<updated>2014-06-26T14:01:09Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:KVM-Backup-Workflow.png&amp;amp;quot;: Adapted to the new sub steps (create-, export- and commitSnapshots)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simplified prov-backup-kvm workflow&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3755</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3755"/>
		<updated>2014-06-26T14:00:25Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Backup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (create-, export- and commitSnapshot) are shown later.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the snapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need Umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the merge process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need Umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need Umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#commitSnapshot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 5: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need Umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs every information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file will be used to be able to restart the daemon without&lt;br /&gt;
# processing every entry again (they appear as new if the daemon is started) &lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need to have at least one configuration which applies for the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You are able to overwrite the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies for the VM is evaluated in the following way:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
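&lt;br /&gt;
This lookup order can be sketched in C-style pseudocode (the getConfiguration helpers are hypothetical and only illustrate the precedence):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object config = getVMConfiguration( machine );&lt;br /&gt;
&lt;br /&gt;
// No VM-specific configuration, fall back to the VM-Pool&lt;br /&gt;
if ( config == NULL )&lt;br /&gt;
{&lt;br /&gt;
    config = getVMPoolConfiguration( machine );&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Neither a VM- nor a VM-Pool-specific configuration exists,&lt;br /&gt;
// so the stoney-cloud-wide configuration applies&lt;br /&gt;
if ( config == NULL )&lt;br /&gt;
{&lt;br /&gt;
    config = getStoneyCloudWideConfiguration();&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;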
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value defining how many backup iterations are kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or for a specific virtual machine template, it is independent of the VM-Nodes. Therefore there is no guarantee that this RAM-Disk exists on all the VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Boolean value indicating whether a virtual machine is excluded from the default backup plan. Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script; see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This reduces the backup (merge) time considerably.&lt;br /&gt;
*** Needs different behaviour for save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
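&lt;br /&gt;
With libvirt, this commit-back step could roughly look as follows (a sketch only; it assumes virsh is available on the VM-Node, that vda is the disk target and that the snapshot overlay is the active image):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Commit the changes written to the overlay back into the original (backing) image&lt;br /&gt;
virsh blockcommit &amp;lt;domain&amp;gt; vda --active --pivot&lt;br /&gt;
&lt;br /&gt;
# The overlay file is no longer in use afterwards and can be removed&lt;br /&gt;
rm &amp;lt;overlay-image&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;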
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3754</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3754"/>
		<updated>2014-06-26T13:58:20Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Current Implementation (Restore) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea for backing up a VM or a VM-Template is to divide the task into three subtasks:&lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file; the underlying disk-image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the write operations performed on the overlay back to the underlying (original) disk image. The underlying image is then read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. That way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: call the three sub-processes (create-, export- and commitSnapshot) one after the other. The control instance therefore only needs to do the following: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if ( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots of the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part is that the control instance remembers whether the snapshot for a given machine was successful: if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes for that machine. So the control instance needs a little more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // Check if the element at this position is not null: then the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
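&lt;br /&gt;
As a sketch, the disk-only snapshot could look like the following virsh call (the machine name vm-001, the disk target vda and the overlay path are assumptions; see the linked pages for the exact commands used):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create an external, disk-only snapshot: all further writes go to the overlay file,&lt;br /&gt;
# the underlying disk image becomes read-only&lt;br /&gt;
virsh snapshot-create-as vm-001 backup-snapshot \&lt;br /&gt;
      --diskspec vda,file=/path/to/images/vm-001-overlay.qcow2 \&lt;br /&gt;
      --disk-only --atomic --no-metadata&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;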
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
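&lt;br /&gt;
As a sketch, the online commit could look like the following virsh call (machine name, disk target and overlay path are assumptions; see the linked pages for the exact commands used):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Commit the overlay back into the underlying image and pivot the running VM back to it&lt;br /&gt;
virsh blockcommit vm-001 vda --active --pivot --verbose&lt;br /&gt;
# Afterwards the overlay file is no longer used and can be deleted&lt;br /&gt;
rm /path/to/images/vm-001-overlay.qcow2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;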
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. That way it is possible to run the backup jobs decentralized on every vm-node. The control instance can then modify the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration specifies that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]); it is written at the time when the backup is planned and should be executed. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the placeholder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
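&lt;br /&gt;
The conversion from the localtime backup schedule to the UTC timestamp used as the sub tree name can be sketched with GNU date (the Europe/Zurich timezone is an assumption):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 03:00 localtime with daylight-saving (UTC+2) corresponds to 01:00 UTC&lt;br /&gt;
TZ=&amp;quot;Europe/Zurich&amp;quot; date -u -d &amp;quot;2012-10-02 03:00&amp;quot; +%Y%m%dT%H%M%SZ&lt;br /&gt;
# Prints: 20121002T010000Z&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;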
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk-images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk-images back to the underlying ones is done.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process could be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process could be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need to have a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same time]]).&lt;br /&gt;
* We already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper wrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
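&lt;br /&gt;
Whether the last backup run of a machine succeeded can be checked with ldapsearch; a sketch (bind parameters as in the examples below, the UUID is an example):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
# A successfully finished backup leaf shows sstProvisioningMode: finished and sstProvisioningReturnValue: 0&lt;br /&gt;
ldapsearch -x -W -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ \&lt;br /&gt;
      -b &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot; \&lt;br /&gt;
      sstProvisioningMode sstProvisioningReturnValue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;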
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
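&lt;br /&gt;
Deleting the subtree can be sketched with ldapdelete (bind parameters as in the examples above, the UUID is an example):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
# -r recursively deletes the backup ou together with all backup leaves below it&lt;br /&gt;
ldapdelete -x -W -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -r \&lt;br /&gt;
      &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;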
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve these conflicts&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone!&lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate with each other through the backend and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (the state file and the disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started: we simply copy the disk images back to their original location and restore the VM from the state file (which is also in the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
&#039;&#039;&#039;Attention&#039;&#039;&#039;: The restore process is neither defined nor implemented yet. The following documentation describes the old restore process.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (see [[stoney_conductor:_Backup#Current_Implementation_.28Backup.29]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that there is no other way to restore the machine. It might be easier and safer to retrieve lost files from the online backup, if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some bash variables so that you can copy and paste the commands in the rest of this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you set here. If one of them is wrong, you might restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This directory should contain: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The LDIF file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image name(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the LDIF file to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check whether the current LDAP entry differs from the one in the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences do not matter), you can skip the following step. Otherwise use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) to the LDAP directory, after performing some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And restore the domain from the state file at the backup location (if you had to edit the XML description, pass the edited file additionally with the --xml option of virsh restore):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3753</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3753"/>
		<updated>2014-06-26T13:54:55Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Current Implementation (Backup) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
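Assuming a single qcow2 disk attached as device vda (the machine name and paths below are placeholders), the three subtasks roughly map to the following virsh and shell commands, sketched here as a function:&lt;br /&gt;

```shell
# Hypothetical sketch of the three-phase backup cycle for one machine.
# Machine name, device name and paths are placeholders; running it for
# real requires a live libvirt domain.
backup_cycle() {
    local vm="$1" disk="vda"    # assumed guest disk device

    # 1) createSnapshot: external, disk-only snapshot; libvirt creates a
    #    new overlay file and the original image becomes read-only.
    virsh snapshot-create-as "$vm" backup-snap --disk-only --atomic --no-metadata || return 1

    # 2) exportSnapshot: the now frozen base image can safely be copied away.
    cp -p "/path/to/images/${vm}.qcow2" /path/to/backup/location/ || return 1

    # 3) commitSnapshot: merge the overlay back into the base image and
    #    pivot the running domain back to it; the overlay can then be deleted.
    virsh blockcommit "$vm" "$disk" --active --pivot
}

# backup_cycle vm-001
```

This only sketches the mechanism; the exact commands are documented in [[Libvirt_external_snapshot_with_GlusterFS]].&lt;br /&gt;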
Furthermore there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (create-, export- and commitSnapshot) one after the other. The control instance therefore only needs some very basic logic: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the &lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // Check if the element at this position is not null; then the snapshot &lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
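The control instance writes shown in the steps below can also be reproduced by hand with the OpenLDAP command-line tools. A sketch, following the pattern of the LDIF examples in this section (the server URI and bind DN are placeholder values):&lt;br /&gt;

```shell
# Sketch: build an LDIF that pushes a provisioning-mode change to the backend.
# The DN follows the examples in this section; URI and bind DN are placeholders.
{
  printf '%s\n' 'dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch'
  printf '%s\n' 'changetype: modify'
  printf '%s\n' 'replace: sstProvisioningState'
  printf '%s\n' 'sstProvisioningState: 0'
  printf '%s\n' '-'
  printf '%s\n' 'replace: sstProvisioningMode'
  printf '%s\n' 'sstProvisioningMode: snapshot'
} > /tmp/trigger-snapshot.ldif

# Apply it (commented out; prompts for the manager password on a live system):
# ldapmodify -H ldaps://ldapm.example.org -x -D "cn=Manager,o=stepping-stone,c=ch" -W -f /tmp/trigger-snapshot.ldif
```

In the running system the control instance performs these modifications itself; the command above is only useful for testing or manual intervention.&lt;br /&gt;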
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says, that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time for which the backup is planned (in the form [YYYY][MM][DD]T[hh][mm][ss]Z, see [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]) and is written at the time the backup is planned and scheduled for execution. The timestamp &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; breaks down as follows:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the placeholder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
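Such a UTC timestamp can, for example, be generated on the shell with GNU date (a sketch; the timezone Europe/Zurich is an assumption based on the 03:00 localtime example above):&lt;br /&gt;

```shell
# Convert the planned localtime (03:00 in Europe/Zurich, CEST = UTC+2)
# into the UTC backup timestamp of the form [YYYY][MM][DD]T[hh][mm][ss]Z.
ts="$(date -u -d 'TZ="Europe/Zurich" 2012-10-02 03:00:00' +%Y%m%dT%H%M%SZ)"
echo "$ts"   # 20121002T010000Z
```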
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
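While a backup run is in progress, its state can be observed from the shell with ldapsearch (a sketch; host and bind DN are placeholders):&lt;br /&gt;

```shell
# Sketch: build the DN of a backup run and show the ldapsearch call that
# would read its provisioning state. Host and credentials are placeholders.
backup_time="20121002T010000Z"
vm="kvm-005"
base="ou=${backup_time},ou=backup,sstVirtualMachine=${vm},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
# Run this against your LDAP server:
echo ldapsearch -x -W -D "cn=Manager,o=stepping-stone,c=ch" \
  -H ldaps://ldapm.stepping-stone.ch/ -b "$base" -s base \
  sstProvisioningMode sstProvisioningState sstProvisioningReturnValue
```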
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
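The mechanics of the snapshot are not prescribed by this interface. One common way to produce the overlay file that the later commit step merges back is an external, disk-only snapshot (a sketch with illustrative names, not necessarily the daemon&#039;s exact commands):&lt;br /&gt;

```shell
# Sketch: an external snapshot turns the live image into a stable backing
# file (safe to export) and redirects new writes into an overlay file.
# VM name, disk target (vda) and overlay path are illustrative.
vm="kvm-005"
overlay="/var/virtualization/${vm}.backup-overlay.qcow2"
echo virsh snapshot-create-as "$vm" backup-snap --disk-only --atomic \
  --no-metadata --diskspec "vda,file=${overlay}"
```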
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
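The export itself is essentially a copy of the now-stable files to the backup location. A minimal sketch, simulated here with throw-away directories (the real paths come from sstBackupRootDirectory and the live image directory):&lt;br /&gt;

```shell
# Sketch: copy the stable backing image and the XML description to the
# backup location, preserving permissions. Directories are simulated.
live="$(mktemp -d)"     # stands in for /var/virtualization
backup="$(mktemp -d)"   # stands in for /var/backup/virtualization
touch "${live}/kvm-005.qcow2" "${live}/kvm-005.xml"
cp -p "${live}/kvm-005.qcow2" "${live}/kvm-005.xml" "${backup}/"
ls "${backup}"
```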
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk-images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk-images back to the underlying ones is done.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
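Merging the overlay back into the underlying image can be done with qemu-img (a sketch; whether the daemon uses qemu-img commit or virsh blockcommit is an implementation detail not fixed here, and the overlay path is illustrative):&lt;br /&gt;

```shell
# Sketch: qemu-img commit writes the deltas recorded in the overlay back
# into its backing file; afterwards the overlay can be discarded.
overlay="/var/virtualization/kvm-005.backup-overlay.qcow2"
echo qemu-img commit "$overlay"
echo rm -f "$overlay"
```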
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need to have a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We do already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We do already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the necessary logic around BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup; if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful; if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the KVMBackupWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the KVMBackupWrapper.pl script to finish&lt;br /&gt;
# Go again through all machines and update the backup subtree a last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
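The numbered control flow above can be sketched roughly as follows (illustrative shell only; the real LDAPKVMWrapper.pl is a Perl script, and the helper &amp;lt;code&amp;gt;is_excluded&amp;lt;/code&amp;gt; is a hypothetical stand-in for the LDAP lookup of sstbackupexcludefrombackup):&lt;br /&gt;

```shell
# Rough sketch of the LDAPKVMWrapper control flow described above.
machines="kvm-005 kvm-007"                 # 1. machines running on this host
is_excluded() { [ "$1" = "kvm-007" ]; }    # hypothetical stand-in for the
                                           # sstbackupexcludefrombackup lookup
list=""
for m in $machines; do
  is_excluded "$m" && continue             # drop excluded machines
  list="${list:+$list }$m"
done
echo "backing up: $list"                   # 3./4. hand over to BackupKVMWrapper.pl
```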
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
If you want to exclude a machine from the backup run, log in to one of the [[VM-Node | vm-nodes]] and add the following entry to your LDAP directory:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
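Deleting the subtree can be done recursively with ldapdelete (a sketch; bind parameters match the exclude example above, and the UUID is a placeholder):&lt;br /&gt;

```shell
# Sketch: recursively delete the machine's backup subtree so that it is
# recreated on the next backup run. The UUID below is a placeholder.
machineuuid="b9d13dbc-9ab7-4948-9daa-a5709de83dc2"
dn="ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
echo ldapdelete -r -x -W -D "cn=Manager,o=stepping-stone,c=ch" \
  -H ldaps://ldapm.stepping-stone.ch/ "$dn"
```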
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve these conflicts&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone!&lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
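The comparison in step 3 boils down to diffing the retained backend entry against a dump of the live one (a sketch, simulated below with two small files; in practice the live side would come from an ldapsearch of the machine&#039;s entry, and the attribute shown is illustrative):&lt;br /&gt;

```shell
# Sketch: diff the retained backend entry against the live one.
# The files and the sstMemory values are illustrative.
retained="$(mktemp)"; live="$(mktemp)"
printf 'sstMemory: 1024\n' > "$retained"
printf 'sstMemory: 2048\n' > "$live"
diff -u "$retained" "$live" || true   # a non-empty diff = conflicts to resolve
```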
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VMs backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VMs XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all necessary files in the configured retain location, the restore process can be started. Here we simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
* Resolving the conflicts in the backend and the XML description file is not yet done&lt;br /&gt;
** Actually, all steps not executed by prov-backup-kvm are not yet properly implemented (cf. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step from the [[#Restore_2 | restore process ]] is different:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved.&lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy-paste the commands in this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one of them is not correct, you will restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
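Before touching anything, it can help to verify that the chosen iteration is complete. A minimal sketch, mirroring the file naming above (the helper name checkIteration is hypothetical):&lt;br /&gt;

```shell
# Hypothetical sanity check: verify that every artifact of one backup
# iteration is present before restoring (state, xml, ldif and disk image).
checkIteration() {  # $1 = iteration directory, $2 = machine name, $3 = backup date
    for suffix in state xml ldif; do
        [ -f "$1/$2.${suffix}.$3" ] || { echo "missing $2.${suffix}.$3"; return 1; }
    done
    # at least one disk image must be present as well
    ls "$1"/*.qcow2."$3" > /dev/null 2>&1 || { echo "missing disk image"; return 1; }
}
```

If the check reports a missing file, pick an older iteration instead.&lt;br /&gt;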
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important) you can skip the following step. Otherwise use [https://cloud.stepping-stone.ch/phpldapadmin phpLDAPadmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) back to the LDAP directory, after first applying some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, restore the domain from the state file at the backup location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3752</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3752"/>
		<updated>2014-06-26T13:52:47Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Backup a single machine */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks:&lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
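Under the hood these subtasks map to libvirt operations. The following is only a rough sketch with plain virsh and cp, not the actual daemon implementation; the single disk on &amp;quot;vda&amp;quot; and the overridable VIRSH variable are assumptions for illustration:&lt;br /&gt;

```shell
# Rough sketch of the three sub-processes (assumption: one disk on "vda").
VIRSH="${VIRSH:-virsh}"   # override, e.g. VIRSH=echo, for a dry run

createSnapshot() {  # $1 = machine; disk-only external snapshot, writes go to a new overlay
    "$VIRSH" snapshot-create-as "$1" backup-snap --disk-only --atomic
}

exportSnapshot() {  # $1 = underlying (now read-only) image, $2 = backup directory
    cp -p "$1" "$2/"
}

commitSnapshot() {  # $1 = machine; merge the overlay back and pivot to the base image
    "$VIRSH" blockcommit "$1" vda --active --pivot
}

backupMachine() {   # $1 = machine, $2 = disk image, $3 = backup directory
    createSnapshot "$1" && exportSnapshot "$2" "$3" && commitSnapshot "$1"
}
```
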
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (create-, export- and commitSnapshot) one after the other. The control instance therefore only needs some very basic logic:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots of the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a little more logic:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
# Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # If the snapshot was successful, put the machine into the &lt;br /&gt;
    # successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# export and commit all successful_snapshot machines&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # Check if the element at this position is not null, then the snapshot &lt;br /&gt;
    # for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look like shown in the following picture (Figure 1):&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
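The handshake itself consists only of reads and writes of the sstProvisioningMode/sstProvisioningState attributes. As an illustration (a simple polling model; how the real daemons get notified may differ), a vm-node daemon could read the current mode like this:&lt;br /&gt;

```shell
# Illustrative only: read the handshake attribute of one backup iteration.
LDAPSEARCH="${LDAPSEARCH:-ldapsearch}"  # override for testing

currentMode() {  # $1 = dn of the backup iteration entry
    "$LDAPSEARCH" -x -LLL -s base -b "$1" sstProvisioningMode \
        | sed -n 's/^sstProvisioningMode: //p'
}

# A daemon loop would then dispatch on the returned mode, e.g.
# snapshot -> createSnapshot, export -> exportSnapshot, commit -> commitSnapshot.
```
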
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]), and it is written at the time the backup is planned. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the placeholder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
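The localtime-to-UTC conversion behind the 20121002T010000Z name can be reproduced with GNU date; the Europe/Zurich timezone is an assumption here:&lt;br /&gt;

```shell
# A backup planned at 03:00 localtime (assumed Europe/Zurich, CEST = UTC+2
# on that date) yields the UTC ou name used in the sub tree above.
planned='TZ="Europe/Zurich" 2012-10-02 03:00'
ou_timestamp=$(TZ=UTC date -d "$planned" +%Y%m%dT%H%M%SZ)
echo "$ou_timestamp"   # 20121002T010000Z
```
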
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing the changes from the overlay disk images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk images back to the underlying ones is done.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished and therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished and therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need a workaround for backing up the machines:&lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation of the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts in a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl: it generates the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (cf. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree a last time&lt;br /&gt;
#* Check if the backup was successful, if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
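&lt;br /&gt;
The backup-subtree update described above (remove the &amp;quot;yesterday-leaf&amp;quot;, add the &amp;quot;today-leaf&amp;quot;) could be expressed as LDIF similar to the following. The DNs and timestamps are illustrative, modelled on the examples on this page; this is a sketch, not the actual wrapper output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;) ...&lt;br /&gt;
dn: ou=20121001T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: delete&lt;br /&gt;
&lt;br /&gt;
# ... and add a new one (the &amp;quot;today-leaf&amp;quot;)&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: add&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;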
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrappers interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
If you want to exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
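&lt;br /&gt;
Assuming the connection parameters from the examples above, deleting the subtree recursively could look like this (a sketch; double-check the DN before running it):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
ldapdelete -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x -r &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;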
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) with the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally, the restore process can be divided into two phases:&lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone!&lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (the protocol is file://).&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
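&lt;br /&gt;
The comparison in step 3 can be done with a plain &amp;lt;code&amp;gt;diff&amp;lt;/code&amp;gt; once the live backend entry has been exported to a file (the file names are illustrative; this is a sketch, not the exact implementation):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
diff -Naur /tmp/vm-001.live.backend /path/to/retain/vm-001.backend&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;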
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VMs backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VMs XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started: we simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
* Resolving the conflicts in the backend and XML description file is not yet done&lt;br /&gt;
** Actually, all steps not executed by the prov-backup-kvm daemon are not yet properly implemented (cf. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step from the [[#Restore_2 | restore process ]] is different:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML from the state file is used when restoring the machine. Therefore, the conflicts are not properly resolved.&lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to retrieve lost files from the online backup, if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some bash variables so that you can copy and paste the commands in this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double-check all variables you are setting here. If one is not correct, you might restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This directory should contain:&lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
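&lt;br /&gt;
The &amp;lt;BACKUP-DATE&amp;gt; suffix can be derived from any of these file names with plain shell parameter expansion, for example (the file name is taken from the examples above):&lt;br /&gt;

```shell
# Strip everything up to and including the last dot to get the backup date
file="b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z"
backupdate="${file##*.}"
echo "${backupdate}"   # prints 20140109T134445Z
```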
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double-check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise use the [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) back to the LDAP directory, after performing some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
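&lt;br /&gt;
Alternatively, the deletion can be done on the command line instead of PhpLdapAdmin. This is only a sketch; double check the DN before running it, and remember the DHCP entry as well:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Recursively delete the machine entry including its sub tree&lt;br /&gt;
ldapdelete -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -r &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;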
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restore the domain from the state file at the backup location, together with the XML from the retain location (the one you might have edited):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
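To verify, you can query the domain state (it should report &amp;quot;running&amp;quot;):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh domstate ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;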
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3751</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3751"/>
		<updated>2014-06-26T13:52:15Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 12: Finalizing the Backup Process (Control instance daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory should be its own partition with the same size as the partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore there is a control instance, which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only does some very basic stuff: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a little bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the element at this position is not null, the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
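&lt;br /&gt;
As an illustration, a disk-only external snapshot could be created with virsh roughly like this (a sketch only; the snapshot name is arbitrary and the linked pages above are authoritative):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create an external, disk-only snapshot; all further writes go to a new overlay file&lt;br /&gt;
virsh snapshot-create-as ${machinename} backup-snapshot --disk-only --atomic --no-metadata&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;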
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
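&lt;br /&gt;
As an illustration, committing the overlay back into the underlying image could look roughly like this (a sketch; &amp;lt;code&amp;gt;vda&amp;lt;/code&amp;gt; is an assumed target device and the linked pages above are authoritative):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Commit the overlay changes back into the base image and pivot the domain back to it&lt;br /&gt;
virsh blockcommit ${machinename} vda --active --pivot&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;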
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node. The control instance can then modify the backend, and these changes are seen by the different backup daemons on the vm-nodes. So the communication could look like shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says, that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned to be executed (in the form [YYYY][MM][DD]T[hh][mm][ss]Z, see [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]), and it should be written at the time when the backup is planned. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
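&lt;br /&gt;
Such a UTC timestamp can, for example, be generated as follows:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Prints the current time in UTC, e.g. 20121002T010000Z&lt;br /&gt;
date --utc +&#039;%Y%m%dT%H%M%SZ&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;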
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk-images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk-images back to the underlying ones is done.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished, and therefore a new backup process could be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished, and therefore a new backup process could be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need to have a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same time]]).&lt;br /&gt;
* We already have the implementation of the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing pieces and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the missing logic around BackupKVMWrapper.pl: in fact, the LDAPKVMWrapper generates the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (cf. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful, if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
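&lt;br /&gt;
The steps above can be sketched as follows (a pseudocode sketch with hypothetical helper names; the real logic lives in LDAPKVMWrapper.pl):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machines=&amp;quot;&amp;quot;&lt;br /&gt;
for machine in $(virsh list --name); do&lt;br /&gt;
    # Hypothetical helpers that check the backup sub tree in the LDAP backend&lt;br /&gt;
    is_excluded &amp;quot;${machine}&amp;quot; &amp;&amp; continue&lt;br /&gt;
    last_backup_failed &amp;quot;${machine}&amp;quot; &amp;&amp; continue&lt;br /&gt;
    machines=&amp;quot;${machines} ${machine}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&lt;br /&gt;
# Remove the old backup leaf and add a new one for every machine (hypothetical helper)&lt;br /&gt;
update_backup_subtrees ${machines}&lt;br /&gt;
&lt;br /&gt;
# Run the actual backup and wait for it to finish&lt;br /&gt;
/usr/bin/BackupKVMWrapper.pl ${machines}&lt;br /&gt;
&lt;br /&gt;
# Set sstProvisioningMode = finished for every successful backup (hypothetical helper)&lt;br /&gt;
for machine in ${machines}; do&lt;br /&gt;
    mark_finished_if_successful &amp;quot;${machine}&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;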
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine in the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
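Assuming the same connection parameters as in the examples above, the subtree deletion can be done with ldapdelete -r. The sketch below only builds and prints the command instead of executing it; drop the echo to actually delete the subtree:&lt;br /&gt;

```shell
# Build the DN of the machine's backup subtree and print the recursive delete command.
machineuuid="b9d13dbc-9ab7-4948-9daa-a5709de83dc2"   # example UUID
dn="ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
# -r removes the ou=backup entry together with all backup leaves below it.
echo ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x "$dn"
```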
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (the protocol is file://).&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
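The comparison in step 3 boils down to an ordinary diff between the two backend entries. A self-contained sketch with throw-away example files (the attribute name sstMemory is used only for illustration):&lt;br /&gt;

```shell
# Illustrate the conflict check between the live and the unretained backend entry.
live=$(mktemp)
retain=$(mktemp)
printf 'sstMemory: 1024\n' > "$live"     # live backend entry (example attribute)
printf 'sstMemory: 2048\n' > "$retain"   # unretained backend entry from the backup
if diff -u "$live" "$retain" > /dev/null; then
    result="no conflicts"
else
    result="conflicts to resolve"        # the user must decide which value wins
fi
echo "$result"
rm -f "$live" "$retain"
```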
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shut down the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime, the vm-manager merges the LDIF we unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started. Here we simply copy the disk images back to their original location and restore the VM from the state file (which is also in the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know, that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
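Taken together, steps 01 to 12 walk sstProvisioningMode through a fixed sequence of values, alternating between the two daemons. The following sketch only prints the expected order (C = Control instance daemon, P = Provisioning-Backup-KVM daemon); it does not interact with the LDAP backend:&lt;br /&gt;

```shell
# Expected sstProvisioningMode sequence during a restore.
sequence="unretainSmallFiles unretainingSmallFiles unretainedSmallFiles
unretainLargeFiles unretainingLargeFiles unretainedLargeFiles
restore restoring restored finished"

for mode in $sequence; do
    case "$mode" in
        # Progress and completion values are written by the backup daemon ...
        unretaining*|unretained*|restoring|restored) actor="P" ;;
        # ... while the trigger values and "finished" come from the control daemon.
        *) actor="C" ;;
    esac
    echo "$actor: sstProvisioningMode = $mode"
done
```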
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (c.f. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts in the backend and XML description file is not yet done&lt;br /&gt;
** Actually all steps not executed by prov-backup-kvm are not yet properly implemented (c.f. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step from the [[#Restore_2 | restore process ]] is different:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some bash variables so that you can copy-paste the commands in the rest of this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one is not correct, you will restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take another look at the variables and &#039;&#039;&#039;double check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the LDIF file to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the dhcp entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) to the LDAP directory, after doing some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now restore the domain from the state file in the backup location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3750</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3750"/>
		<updated>2014-06-26T13:51:56Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 11: Finalizing the Retaing Process (Provisioning-Backup-KVM daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea for backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can call these three sub-processes independently for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only needs to do some very basic work: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if ( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots of the machines lie as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes for that machine. So the control instance needs a little more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the element at this position is not null, the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
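&lt;br /&gt;
A rough sketch of this step with virsh (vm-001, the snapshot name and the overlay path are placeholders): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create an external, disk-only snapshot: a new overlay file is created and&lt;br /&gt;
# all further writes go there; the underlying disk-image becomes read only.&lt;br /&gt;
virsh snapshot-create-as vm-001 backup-snapshot \&lt;br /&gt;
      --disk-only --atomic \&lt;br /&gt;
      --diskspec vda,file=/path/to/images/vm-001.backup-snapshot.qcow2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;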
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
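&lt;br /&gt;
A rough sketch of this step with virsh for a running VM (vm-001, the disk target vda and the overlay path are placeholders): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Commit the overlay back into the underlying (original) disk-image and pivot&lt;br /&gt;
# the running VM back to it; afterwards the overlay file can be deleted.&lt;br /&gt;
virsh blockcommit vm-001 vda --active --pivot&lt;br /&gt;
rm /path/to/images/vm-001.backup-snapshot.qcow2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;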
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look like shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time for which the backup is planned, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]), and it should be written at the time when the backup is scheduled for execution. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the placeholder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
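&lt;br /&gt;
The localtime to UTC conversion can be sketched with GNU date; the offset +0200 corresponds to the daylight-saving localtime (CEST) of the example: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# 03:00 localtime (UTC+2) is 01:00 UTC, which yields the timestamp of the sub tree&lt;br /&gt;
date -u -d &amp;quot;2012-10-02 03:00:00 +0200&amp;quot; +%Y%m%dT%H%M%SZ&lt;br /&gt;
# 20121002T010000Z&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;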
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file back to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk-images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committed&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the committing of the changes from the overlay disk-images back to the underlying ones is done. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;committed&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not yet have a working control instance, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same time]]).&lt;br /&gt;
* We already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the necessary logic around BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper wrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (cf. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like this: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful, if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
To exclude a machine from the backup run, log in to one of the [[VM-Node | vm-nodes]] and add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
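&lt;br /&gt;
Assuming the same connection parameters as in the exclusion example above, the deletion could look like this (machineuuid is a placeholder): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
# -r removes the backup entry together with all entries below it&lt;br /&gt;
ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x \&lt;br /&gt;
    &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;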
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VMs backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VMs XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
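&lt;br /&gt;
For an OpenLDAP backend, the &amp;quot;Restore the VMs backend entry&amp;quot; step could be sketched as follows (connection parameters as in the exclusion examples above; the retain path is a placeholder): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Write the backend entry stored at the retain location back into the directory&lt;br /&gt;
ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x \&lt;br /&gt;
    -f /path/to/retain/vm-001.backend&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;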
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
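&lt;br /&gt;
Applied by hand, such a change corresponds to an &amp;lt;code&amp;gt;ldapmodify&amp;lt;/code&amp;gt; call along these lines (the host, bind DN and file name are illustrative examples, not the configured values):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Apply the LDIF above against the directory server&lt;br /&gt;
ldapmodify -H ldaps://ldapm.example.org -x -D &amp;quot;cn=Manager,o=stepping-stone,c=ch&amp;quot; -W -f step01.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;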
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (the state file and the disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started. In this step we simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts in the backend and XML description file is not yet done&lt;br /&gt;
** Actually, all steps not executed by prov-backup-kvm are not yet properly implemented (cf. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step of the [[#Restore_2 | restore process]] differs:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; the XML from the state file is used when restoring the machine. Therefore the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to retrieve lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some bash variables so that you can copy and paste the commands in the following guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double-check all variables you are setting here. If one is not correct, you will restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now you should save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double-check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the LDIF file to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF you just edited to the LDAP directory (after first doing some general replacements):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, restore the domain from the state file at the backup location (due to the Sys::Virt limitation mentioned above, the XML embedded in the state file is used):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3749</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3749"/>
		<updated>2014-06-26T13:50:47Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file; the underlying disk image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the write operations from the overlay back into the underlying (original) disk image. The underlying image is then read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only has to do some very basic work: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the &lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // Check if the element at this position is not null; then the snapshot &lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
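&lt;br /&gt;
The linked pages contain the authoritative commands; as a rough sketch, an external disk-only snapshot can be created with virsh along these lines (the machine and snapshot names are illustrative):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create an external, disk-only snapshot; all further writes go to the new overlay file&lt;br /&gt;
virsh snapshot-create-as vm-001 backup-snapshot --disk-only --atomic&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;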
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
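&lt;br /&gt;
Again as a rough sketch (the machine name and the target disk &amp;lt;code&amp;gt;vda&amp;lt;/code&amp;gt; are illustrative), the commit roughly corresponds to:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Commit the changes from the overlay back into the base image and pivot the domain back to it&lt;br /&gt;
virsh blockcommit vm-001 vda --active --pivot&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;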
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentrally on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look like the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
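&lt;br /&gt;
The five cron fields above can be sanity-checked with a few lines of code. The following is only an illustrative sketch (the fires helper is hypothetical, not part of the stoney cloud code base) and handles just the literal-or-* fields that this example plan (0 3 * * *) uses:&lt;br /&gt;

```python
# Hypothetical sketch: does the backup plan above fire at a given localtime?
# Only literal values and "*" are supported, which is all this plan needs.
PLAN = {"minute": "0", "hour": "3", "day": "*", "month": "*", "dayofweek": "*"}

def fires(plan, minute, hour, day, month, dayofweek):
    values = {"minute": minute, "hour": hour, "day": day,
              "month": month, "dayofweek": dayofweek}
    # A field matches if it is "*" or equals the corresponding time value.
    return all(f == "*" or int(f) == values[k] for k, f in plan.items())

print(fires(PLAN, minute=0, hour=3, day=2, month=10, dayofweek=1))  # True
print(fires(PLAN, minute=0, hour=4, day=2, month=10, dayofweek=1))  # False
```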
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time for which the backup is planned, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]). It is written at the moment the backup is scheduled for execution. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the placeholder for the backup that is to be executed at 03:00 hours (localtime with daylight saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
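&lt;br /&gt;
The localtime-to-UTC conversion of the sub tree name can be illustrated in a few lines. This is a sketch only; it assumes the host timezone is Europe/Zurich, which is why 03:00 CEST becomes the 20121002T010000Z sub tree name:&lt;br /&gt;

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def backup_ou(local_fire_time: datetime) -> str:
    """Render a localtime crontab firing time as the UTC ou value."""
    utc = local_fire_time.astimezone(ZoneInfo("UTC"))
    return utc.strftime("%Y%m%dT%H%M%SZ")

# 2012-10-02 03:00 in Europe/Zurich is CEST (UTC+2), hence 01:00 UTC.
fire = datetime(2012, 10, 2, 3, 0, 0, tzinfo=ZoneInfo("Europe/Zurich"))
print(backup_ou(fire))  # 20121002T010000Z
```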
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
By setting the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the Control instance daemon kicks off the actual backup process.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
By setting the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the commit Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the commit command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;committing&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is committing changes from the overlay disk images back to the underlying ones.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to committing by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: committing&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
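&lt;br /&gt;
Taken together, steps 01 to 12 move sstProvisioningMode through a fixed sequence. The following sketch is purely illustrative (it is not part of the daemons) and encodes the hand-shake above; note that the flow documented here hands over from committing directly to retained:&lt;br /&gt;

```python
# Illustrative only: the sstProvisioningMode sequence from steps 01-12.
# "broker" modes are written by the control instance daemon (fc-brokerd),
# "daemon" modes by the Provisioning-Backup-KVM daemon.
BACKUP_SEQUENCE = [
    ("broker", "initialize"), ("broker", "initialized"),
    ("broker", "snapshot"), ("daemon", "snapshotting"), ("daemon", "snapshotted"),
    ("broker", "export"), ("daemon", "exporting"), ("daemon", "exported"),
    ("broker", "commit"), ("daemon", "committing"), ("daemon", "retained"),
    ("broker", "finished"),
]

def next_mode(mode: str) -> str:
    """Return the mode that follows the given one in the hand-shake."""
    modes = [m for _, m in BACKUP_SEQUENCE]
    return modes[modes.index(mode) + 1]

print(next_mode("snapshotted"))  # export
```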
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl. In essence, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup; if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful; if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful, if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
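&lt;br /&gt;
The machine-selection part of this flow can be sketched as follows. The helper is hypothetical and stands in for the LDAP lookups the LDAPKVMWrapper performs; in reality both the exclusion flag (sstbackupexcludefrombackup) and the last backup result would be read from the LDAP backend:&lt;br /&gt;

```python
# Hypothetical sketch of steps 1.x above: build the list of machines the
# LDAPKVMWrapper hands to BackupKVMWrapper.pl.
def machines_to_back_up(running, excluded, last_backup_ok):
    selected = []
    for machine in running:
        if machine in excluded:                    # excluded from backup -> drop
            continue
        if not last_backup_ok.get(machine, True):  # last backup failed -> drop
            continue
        selected.append(machine)
    return selected

print(machines_to_back_up(
    running=["kvm-001", "kvm-002", "kvm-003"],
    excluded={"kvm-002"},
    last_backup_ok={"kvm-003": False},
))  # ['kvm-001']
```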
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrappers interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following command.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, simply add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve these conflicts&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagramming program for KDE to display the content properly).&lt;br /&gt;
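&lt;br /&gt;
Analogous to the backup, the restore hand-shake walks sstProvisioningMode through a fixed sequence. The following is a small illustrative sketch of that sequence (not part of the daemons):&lt;br /&gt;

```python
# Illustrative only: the sstProvisioningMode sequence of the restore
# process described in the step-by-step walkthrough.
RESTORE_SEQUENCE = [
    "unretainSmallFiles", "unretainingSmallFiles", "unretainedSmallFiles",
    "unretainLargeFiles", "unretainingLargeFiles", "unretainedLargeFiles",
    "restore", "restoring", "restored",
]

def restore_next(mode: str) -> str:
    """Return the mode that follows the given one in the restore hand-shake."""
    return RESTORE_SEQUENCE[RESTORE_SEQUENCE.index(mode) + 1]

print(restore_next("unretainedLargeFiles"))  # restore
```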
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all necessary files in the configured retain location, the restore process can be started. We simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
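Taken together, Steps 09 to 12 walk sstProvisioningMode through a small, fixed sequence of values. The following is only an illustrative sketch of that sequence; the helper function and the standalone script form are assumptions for illustration, not part of the actual daemons:

```shell
#!/bin/sh
# Illustrative sketch of the sstProvisioningMode sequence during a restore
# (Steps 09 to 12). next_restore_mode is a hypothetical helper, not part of
# the Control instance or Provisioning-Backup-KVM daemons.
next_restore_mode() {
    case "$1" in
        restore)   echo "restoring" ;;  # Step 10: daemon starts restoring
        restoring) echo "restored"  ;;  # Step 11: daemon finished the restore
        restored)  echo "finished"  ;;  # Step 12: control instance finalizes
        *)         echo "unknown"   ;;
    esac
}

mode="restore"
while [ "${mode}" != "finished" ]; do
    mode=$(next_restore_mode "${mode}")
    echo "${mode}"
done
```

Running the sketch prints the three progress values restoring, restored and finished, mirroring the LDIF modifications shown above.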
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (c.f. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
* Resolving the conflicts in the backend and XML description file is not yet done&lt;br /&gt;
** Actually all steps not executed by prov-backup-kvm are not yet properly implemented (c.f. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step of the [[#Restore_2 | restore process ]] differs:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved.&lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
First, set some useful bash variables so that you can copy and paste the commands in the following guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one of them is wrong, you may restore the wrong machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This directory should contain:&lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences do not matter), you can skip the following step. Otherwise, use the [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF you just edited back to the LDAP directory (first do some general replacements):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then restore the domain from the state file at the backup location, together with the XML from the retain location (the one you might have edited):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3748</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3748"/>
		<updated>2014-06-26T13:49:35Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 09: Start the commit Process (Control instance daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea for backing up a VM or a VM-Template is to divide the task into three subtasks:&lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all subsequent write operations go to this file; the underlying disk image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the write operations from the overlay back to the underlying (original) disk image. The underlying image is then read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only has to do something very basic:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
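The same control flow can also be sketched in shell, the language used elsewhere on this page. The three stub functions below always succeed and merely stand in for the real sub-processes; their bodies and the backup_machine wrapper are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of the single-machine backup chain described above, with stubbed
# sub-processes. Nothing here is the real implementation.
createSnapshot() { echo "snapshotting $1" ; return 0 ; }
exportSnapshot() { echo "exporting $1" ; return 0 ; }
commitSnapshot() { echo "committing $1" ; return 0 ; }

backup_machine() {
    machine="$1"
    if createSnapshot "${machine}"; then
        if exportSnapshot "${machine}"; then
            if commitSnapshot "${machine}"; then
                echo "Successfully backed up machine ${machine}"
            else
                echo "Error while committing snapshot for machine ${machine}"
            fi
        else
            echo "Error while exporting snapshot for machine ${machine}"
        fi
    else
        echo "Error while snapshotting machine ${machine}"
    fi
}

backup_machine "kvm-005"
```

As in the C-style pseudocode, a failure in any sub-process short-circuits the chain and reports an error instead of continuing.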
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a little more logic:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
# Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # If the snapshot was successful, put the machine into the &lt;br /&gt;
    # successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# export and commit all successful_snapshot machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # Check if the element at this position is not null, then the snapshot &lt;br /&gt;
    # for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
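The two-phase idea can likewise be sketched in shell. The stub sub-processes (always succeeding) and the machine names are illustrative assumptions, not the real implementation:

```shell
#!/bin/sh
# Sketch of the two-phase multi-machine backup described above: snapshot all
# machines first, then export and commit only those whose snapshot succeeded.
createSnapshot() { return 0 ; }
exportSnapshot() { return 0 ; }
commitSnapshot() { return 0 ; }

machines="kvm-005 kvm-006"
successful=""

# Phase 1: snapshot all machines so the snapshots lie close together in time
for m in ${machines}; do
    if createSnapshot "${m}"; then
        successful="${successful}${m} "
    else
        echo "Error while snapshotting machine ${m}"
    fi
done

# Phase 2: export and commit only the successfully snapshotted machines
for m in ${successful}; do
    if exportSnapshot "${m}"; then
        if commitSnapshot "${m}"; then
            echo "Successfully backed-up machine ${m}"
        else
            echo "Error while committing snapshot for machine ${m}"
        fi
    else
        echo "Error while exporting snapshot for machine ${m}"
    fi
done
```

Keeping a separate list of successfully snapshotted machines is the shell equivalent of the successful_snapshots array in the pseudocode above.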
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This way it is possible to run the backup jobs decentralized on every vm-node. The control instance can then modify the backend, and these changes are seen by the different backup daemons on the vm-nodes. So the communication could look as shown in the following picture (Figure 1):&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond to the graphical overview above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039; ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned (in the form [YYYY][MM][DD]T[hh][mm][ss]Z, see [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]) and is written at the time the backup is planned. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
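The localtime-to-UTC conversion behind the sub tree name can be checked with a one-off GNU date invocation (this is only a verification aid, not part of any daemon; the fixed UTC+2 offset comes from the daylight-saving example above):

```shell
# 03:00 local time at UTC+2 (daylight-saving) corresponds to 01:00 UTC,
# which is exactly the 20121002T010000Z sub tree name from the example.
# Requires GNU date.
date -u -d "2012-10-02 03:00 +0200" +%Y%m%dT%H%M%SZ
# prints 20121002T010000Z
```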
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
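For reference, committing a qcow2 overlay into its backing file corresponds to a qemu-img invocation along the following lines. This is a sketch only: the overlay path is a placeholder, and the daemon's actual command is not documented on this page.

```shell
# Sketch: merge the changes recorded in a qcow2 overlay back into its backing
# (live) disk image. The path is a placeholder, not the daemon's real layout.
qemu-img commit /var/virtualization/vm-001.qcow2.overlay
```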
&lt;br /&gt;
==== Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retaining&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is retaining the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to retaining by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retaining&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished and a new backup process can therefore be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished and a new backup process can therefore be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
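Throughout the steps above, the current state of a backup entry can be inspected directly in the directory. The following is a minimal sketch, assuming the manager DN and LDAP host shown in the exclusion examples further down this page:

```shell
# Query the provisioning attributes of a backup leaf (DN taken from the LDIF
# examples above). Bind DN and host are assumptions based on the examples
# further down this page.
ldapsearch -x -W -D cn=Manager,o=stepping-stone,c=ch \
  -H ldaps://ldapm.stepping-stone.ch/ \
  -b "ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch" \
  -s base sstProvisioningMode sstProvisioningState sstProvisioningReturnValue
```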
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need to have a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl. In essence, the LDAPKVMWrapper generates the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup; if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful; if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
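The machine-list filtering described in steps 1 and 2 above can be sketched in bash. This is a simplified illustration only, not the actual LDAPKVMWrapper.pl; the two check functions are stubs standing in for the real LDAP lookups:

```shell
#!/bin/bash
# Simplified sketch of the wrapper's machine-list filtering (steps 1-2 above).
# The two check functions are stubs standing in for the real LDAP queries.

is_excluded() {           # stub: real code would read sstbackupexcludefrombackup
  [ "$1" = "vm-excluded" ]
}

last_backup_failed() {    # stub: real code would check the last backup's state
  [ "$1" = "vm-failed" ]
}

filter_machines() {
  for vm in "$@"; do
    if is_excluded "$vm"; then
      echo "skipping $vm (excluded from backup)" >&2
    elif last_backup_failed "$vm"; then
      echo "skipping $vm (last backup failed)" >&2
    else
      echo "$vm"          # machine stays on the backup list
    fi
  done
}

# The resulting list would then be handed to BackupKVMWrapper.pl:
filter_machines vm-001 vm-excluded vm-failed vm-002
```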
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
If you want to exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree already exists in the LDAP directory, you need to add the objectClass and the sstbackupexcludefrombackup attribute instead: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
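A minimal sketch of that deletion, assuming the same manager DN and LDAP host as in the exclusion examples above (the -r flag removes the ou=backup entry together with all entries below it):

```shell
# Recursively delete the machine's backup subtree so that it gets recreated
# during the next backup run. Bind DN and host are taken from the exclusion
# examples above.
machineuuid="<UUID OF THE MACHINE-NAME>" # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2
ldapdelete -r -x -W -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ \
  "ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
```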
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime, the vm-manager merges the LDIF we unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started: the disk images are simply copied back to their original location and the VM is restored from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (c.f. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts between the backend entry and the XML description file is not yet done&lt;br /&gt;
** In fact, all steps not executed by prov-backup-kvm are not yet properly implemented (c.f. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step of the [[#Restore_2 | restore process ]] differs:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML from the state file is used when restoring the machine. Therefore, the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to retrieve lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy and paste the commands in the rest of this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you set here. If one of them is wrong, you may restore the wrong machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double-check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) to the LDAP directory (first do some general replacements):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And restore the domain from the state file at the backup location, using the XML from the retain location (the one you might have edited):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it left off when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3747</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3747"/>
		<updated>2014-06-26T13:49:24Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 09: Start the Retain Process (Control instance daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks:&lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only has to do some very basic work:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if ( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots of the machines are taken as close together as possible. Therefore, the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a little more logic:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
# Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # If the snapshot was successful, put the machine into the &lt;br /&gt;
    # successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# export and commit all successful_snapshot machines&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(successful_snapshots) / sizeof(object); i++ )&lt;br /&gt;
{&lt;br /&gt;
    # Check if the element at this position is not null, then the snapshot &lt;br /&gt;
    # for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node. The control instance can modify the backend, and these changes are seen by the different backup daemons on the vm-nodes. So the communication could look like the following picture (Figure 1):&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says, that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned (in the form [YYYY][MM][DD]T[hh][mm][ss]Z, see [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]) and should be written at the time when the backup is planned and should be executed. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
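As a sanity check, GNU date can translate the planned local backup time into the UTC timestamp used in the DN (a sketch assuming CEST, i.e. UTC+2, as in the example below):

```shell
# 03:00 local time (CEST = UTC+2) on 2012-10-02 corresponds to 01:00 UTC
date --utc --date='2012-10-02 03:00 +0200' +'%Y%m%dT%H%M%SZ'   # prints 20121002T010000Z
```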
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the commit Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;commit&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to commit the changes from the overlay file to the underlying disk-image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# commit (this way the Provisioning-Backup-KVM daemon knows that it must start the commit process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: commit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retaining&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is retaining the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to retaining by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retaining&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished, and therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished, and therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need a workaround for backing up the machines:&lt;br /&gt;
&lt;br /&gt;
* We do already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We do already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the required logic on top of BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (cf. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup; if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful; if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go again through all machines and update the backup subtree a last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
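The machine-list filtering described in step 1 could be sketched as plain shell logic; the two predicate functions are hypothetical stand-ins for the real LDAP lookups (the sstbackupexcludefrombackup attribute and the sstProvisioningMode of the last backup leaf):

```shell
#!/bin/sh
# Hypothetical sketch of the LDAPKVMWrapper filtering step. The two
# predicates below are stubs standing in for the real LDAP lookups.
machines="vm-a vm-b vm-c"

is_excluded()            { [ "$1" = "vm-b" ]; }  # stub: vm-b opted out of backups
last_backup_successful() { [ "$1" != "vm-c" ]; } # stub: vm-c had a failed last run

list=""
for m in ${machines}; do
    is_excluded "$m"            && continue  # drop excluded machines
    last_backup_successful "$m" || continue  # drop machines whose last backup failed
    list="${list} ${m}"
done

echo "machines to back up:${list}"   # prints: machines to back up: vm-a
```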
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cron job&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment out the LDAPKVMWrapper cron job line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree already exists in the LDAP directory, you instead need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine in the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
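&lt;br /&gt;
A hedged sketch of how such a deletion could look with ldapdelete (the &amp;lt;code&amp;gt;-r&amp;lt;/code&amp;gt; option removes the subtree recursively; double check the DN before running this against your directory):&lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
# Recursively delete the machine&#039;s backup subtree; it will be recreated on the next backup run&lt;br /&gt;
ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;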
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (the protocol is file://):&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shut down the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime, the vm-manager merges the LDIF we unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all necessary files in the configured retain location, the restore process can be started: we simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current timestamp by the Control instance daemon when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts in the backend and XML description file is not yet done&lt;br /&gt;
** In fact, all steps not executed by prov-backup-kvm are not yet properly implemented (cf. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step of the [[#Restore_2 | restore process ]] is different:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other possibility to restore the machine. It might be easier and safer to retrieve lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy-paste the commands in the following guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you set here. If one of them is not correct, you might restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This directory should contain: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the different variables and &#039;&#039;&#039;double check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the LDIF file to the retain location (the state file and disk images will be taken directly from the backup location later):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use the [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the dhcp entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) back to the LDAP directory, after first doing some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And restore the domain from the state file at the backup location (note that the XML embedded in the state file is used, since no &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option is passed here):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3746</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3746"/>
		<updated>2014-06-26T13:48:23Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 08: Finalizing the Merging Process (Provisioning-Backup-KVM daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea for backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk only snapshot. A new overlay file is created, all write operations are performed to this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only needs to do some very basic work: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes for that machine. So the control instance needs a little more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
# Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # If the snapshot was successful, put the machine into the &lt;br /&gt;
    # successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# export and commit all successful_snapshot machines&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    # Check if the element at this position is not null, then the snapshot &lt;br /&gt;
    # for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look like shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says, that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
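The sstCron* attributes above follow standard crontab field semantics. As an illustration only (no such crontab line is actually installed; the scheduling daemon evaluates the attributes itself), the configuration above corresponds to the following crontab entry: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sstCronMinute: 0, sstCronHour: 3, sstCronDay: *, sstCronMonth: *, sstCronDayOfWeek: *&lt;br /&gt;
# min hour day-of-month month day-of-week&lt;br /&gt;
0 3 * * * &amp;lt;backup command&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;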
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039; ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned (in the form [YYYY][MM][DD]T[hh][mm][ss]Z, see [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]) and is written at the time the backup is planned and should be executed. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time has to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
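To compute such a timestamp, the planned local backup time can be converted to UTC, for example with GNU date (assuming the host runs in the Europe/Zurich timezone, where 03:00 CEST corresponds to 01:00 UTC): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Convert the planned local backup time to the UTC sub tree name&lt;br /&gt;
TZ=&amp;quot;Europe/Zurich&amp;quot; date -u -d &amp;quot;2012-10-02 03:00&amp;quot; +%Y%m%dT%H%M%SZ&lt;br /&gt;
# 20121002T010000Z&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;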
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties, that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties, that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties, that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exported&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties, that the export of the virtual machine or virtual machine template disk-images is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the Retain Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retain&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to retain (copy and then delete) all the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# retain (this way the Provisioning-Backup-KVM daemon knows that it must start the retaining process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retaining&#039;&#039;&#039; to tell the Control instance daemon and other interested parties, that it is retaining the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to retaining by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retaining&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties, that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished, therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know, that the backup process is finished, therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance yet, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We do already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same time | Backup multiple machines at the same time]]).&lt;br /&gt;
* We do already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful, if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
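Following the steps above, the LDAPKVMWrapper logic can be sketched in the same pseudo code style as the control instance examples (the function names used here are illustrative, not the actual implementation): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = getMachinesOnHost();&lt;br /&gt;
object backup_list[];&lt;br /&gt;
&lt;br /&gt;
# Filter out excluded machines and machines whose last backup failed&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    if ( !isExcludedFromBackup( machines[i] ) &amp;amp;&amp;amp; lastBackupSuccessful( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        backup_list[i] = machines[i];&lt;br /&gt;
        # Remove the old backup leaf and add a new one for today&lt;br /&gt;
        updateBackupSubtree( machines[i] );&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
# Run the actual backup and wait for it to finish&lt;br /&gt;
callBackupKVMWrapper( backup_list );&lt;br /&gt;
&lt;br /&gt;
# Mark successful backups as finished in the backend&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(backup_list) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    if ( backup_list[i] &amp;amp;&amp;amp; backupSuccessful( backup_list[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        setProvisioningMode( backup_list[i], &amp;quot;finished&amp;quot; );&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;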
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
To exclude a machine from the backup run, log in to one of the [[VM-Node | vm-nodes]] and add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
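A sketch of how the backup subtree could be deleted recursively, using the same bind parameters as in the examples above (double-check the DN before running this, as the deletion cannot be undone): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
# -r deletes the ou=backup entry and everything below it&lt;br /&gt;
ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x &amp;quot;ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;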
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve these conflicts&lt;br /&gt;
** The user can also abort the restore process up to this point. After that the restore can not be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
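Step 3 (comparing the retained backend entry with the live entry) could, for example, be done by exporting the live entry and diffing it against the retained file (the DN, file names and bind options are illustrative; your directory may require authentication for reads): &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Export the live backend entry of the VM&lt;br /&gt;
ldapsearch -x -H ldaps://ldapm.stepping-stone.ch/ -LLL -b &amp;quot;sstVirtualMachine=vm-001,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&amp;quot; &amp;gt; /tmp/vm-001.live&lt;br /&gt;
# Show the conflicts between the retained and the live entry&lt;br /&gt;
diff /path/to/retain/vm-001.backend /tmp/vm-001.live&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;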
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VMs backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VMs XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (the state file and the disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime, the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to reconcile possible differences in the configuration of the virtual machine.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started: the disk images are simply copied back to their original location and the VM is restored from the state file (which is also in the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts in the backend and the XML description file is not yet done&lt;br /&gt;
** In fact, all steps not executed by prov-backup-kvm are not yet properly implemented (cf. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step of the [[#Restore_2 | restore process ]] differs:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup, if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy-paste the commands in the following guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one of them is not correct, you might restore a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should find: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
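Before continuing, you may want to verify that the iteration directory is complete. The following is a minimal sketch (the helper function is our own invention, not part of the stoney cloud tools):&lt;br /&gt;

```shell
# Check that a backup iteration directory contains all non-disk files needed
# for a restore: the state file, the XML description and the LDIF, each
# suffixed with the backup date.
check_iteration() {
    local dir="$1" machinename="$2" backupdate="$3"
    local suffix missing=0
    for suffix in state xml ldif; do
        if [ ! -f "${dir}/${machinename}.${suffix}.${backupdate}" ]; then
            echo "missing: ${machinename}.${suffix}.${backupdate}"
            missing=1
        fi
    done
    return $missing
}
```

Remember to also check for the disk image(s) belonging to the machine, since their names cannot be derived from the machine name alone.&lt;br /&gt;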
Now save the backup date and the disk image name(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the different variables and &#039;&#039;&#039;double check them&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin phpLDAPadmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) back to the LDAP directory, after a few general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i \&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And restore the domain from the state file at the backup location. Note that the XML embedded in the state file is used here, since &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; is called without the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3745</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3745"/>
		<updated>2014-06-26T13:47:31Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 07: Starting the Merge Process (Provisioning-Backup-KVM daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea for backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file; the underlying disk image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the write operations performed on the overlay back into the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
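In terms of commands, the three subtasks roughly correspond to the following sketch. The machine name, disk path, target device and snapshot name are placeholders, and the commands are only assembled and echoed here instead of executed; see the sub-process descriptions below for the exact invocations.&lt;br /&gt;

```shell
machinename="kvm-005"                       # placeholder
image="/var/virtualization/vm-001.qcow2"    # placeholder
backup="/var/backup/virtualization"         # placeholder

# createSnapshot: external disk-only snapshot; writes go to a new overlay,
# the underlying image becomes read-only.
create_cmd="virsh snapshot-create-as ${machinename} backup-snap --disk-only --atomic"

# exportSnapshot: the read-only underlying image can safely be copied away.
export_cmd="cp -p ${image} ${backup}/"

# commitSnapshot: merge the overlay back into the underlying image.
commit_cmd="virsh blockcommit ${machinename} vda --active --pivot"

echo "$create_cmd"
echo "$export_cmd"
echo "$commit_cmd"
```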
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only needs some very basic logic: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes for that machine. So the control instance needs a little bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all successfully snapshotted machines&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // Check if the element at this position is not null; then the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentrally on every vm-node. The control instance can then modify the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond with the graphical overview from above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says, that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
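The sstCron* attributes above map one-to-one onto the fields of a crontab entry; a small sketch of that mapping (the command path is a placeholder):&lt;br /&gt;

```shell
# Build the crontab line that corresponds to the sstCron* attributes above.
sstCronMinute=0; sstCronHour=3; sstCronDay='*'; sstCronMonth='*'; sstCronDayOfWeek='*'
line="${sstCronMinute} ${sstCronHour} ${sstCronDay} ${sstCronMonth} ${sstCronDayOfWeek} /usr/bin/backup-job"
echo "$line"   # min hour dom month dow command
```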
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned to be executed, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]). It is written at the time the backup is scheduled. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the placeholder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
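The UTC timestamp used as the sub tree name can be generated with date(1); a minimal sketch:&lt;br /&gt;

```shell
# ISO 8601 basic format in UTC, as used for the ou of the backup entry.
timestamp="$(date -u +%Y%m%dT%H%M%SZ)"
echo "ou=${timestamp},ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
```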
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
By setting the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the Control instance daemon kicks off the actual backup process.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
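Such an LDIF can be applied with ldapmodify; a sketch, with bind DN and server URI taken from the ldapadd examples further down this page (actual credentials may differ):&lt;br /&gt;

```shell
# Write the modification to a temporary LDIF and (commented out) apply it.
ldif="$(mktemp)"
cat > "$ldif" << EOF
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch
changetype: modify
replace: sstProvisioningState
sstProvisioningState: 0
-
replace: sstProvisioningMode
sstProvisioningMode: snapshot
EOF
# ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x -f "$ldif"
```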
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
By setting the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the export Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the export command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;exporting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is exporting the virtual machine or virtual machine template disk images.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to exporting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: exporting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the Merging Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the merge command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;merged&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the merging of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: merged&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the Retain Process (Control instance daemon) ====&lt;br /&gt;
By setting the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retain&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to retain (copy and then delete) all the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# retain (this way the Provisioning-Backup-KVM daemon knows that it must start the retaining process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retaining&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is retaining the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to retaining by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retaining&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not yet have a working control instance, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds some logic on top of BackupKVMWrapper.pl: the LDAPKVMWrapper wrapper generates the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* Check if the backup was successful; if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
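The list-building part of this workflow can be sketched as follows; the two helper functions are hypothetical stand-ins for the LDAP lookups the LDAPKVMWrapper performs per machine:&lt;br /&gt;

```shell
# Stand-ins for the LDAP lookups (exclusion flag and last backup result).
is_excluded()    { case "$1" in vm-002) return 0 ;; *) return 1 ;; esac; }
last_backup_ok() { case "$1" in vm-003) return 1 ;; *) return 0 ;; esac; }

machines=""
for vm in vm-001 vm-002 vm-003; do
    is_excluded "$vm" && continue      # skip machines flagged sstbackupexcludefrombackup
    last_backup_ok "$vm" || continue   # skip machines whose last backup failed
    machines="${machines:+$machines,}$vm"
done
echo "$machines"                       # the list handed to BackupKVMWrapper.pl
```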
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrapper interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
To exclude a machine from the backup run, log in to one of the [[VM-Node | vm-nodes]] and add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine in the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
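Assuming the bind parameters from the examples above, deleting the subtree could look like this (ldapdelete -r removes the entry and everything below it):&lt;br /&gt;

```shell
# Build the DN of the machine's backup subtree (example UUID).
machineuuid="b9d13dbc-9ab7-4948-9daa-a5709de83dc2"
dn="ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
echo "$dn"
# ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x "$dn"
```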
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://).&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
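When a VM has several disk images, the copy can be looped over all of them; a self-contained sketch using temporary directories in place of the real backup and retain paths:&lt;br /&gt;

```shell
# Stand-in directories; on a real vm-node these are the configured
# backup and retain locations.
backup="$(mktemp -d)"; retain="$(mktemp -d)"
touch "$backup/vm-001.qcow2" "$backup/vm-001-data.qcow2"

# Copy every disk image of the VM, preserving attributes (cp -p).
for image in "$backup"/vm-001*.qcow2; do
    cp -p "$image" "$retain/$(basename "$image")"
done
ls "$retain"
```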
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all necessary files in the configured retain location, the restore process can be started: the disk images are simply copied back to their original location and the VM is restored from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (cf. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface.&lt;br /&gt;
* Resolving the conflicts in the backend and XML description file is not yet done&lt;br /&gt;
** Actually, all steps not executed by prov-backup-kvm are not yet properly implemented (cf. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step from the [[#Restore_2 | restore process ]] is different:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML from the state file is used when restoring the machine. Therefore the conflicts are not properly resolved.&lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
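Given this limitation, one conceivable manual workaround (a sketch only, not part of the current implementation) is to update the XML embedded in the state file itself with &amp;lt;code&amp;gt;virsh save-image-define&amp;lt;/code&amp;gt; before restoring. The file names below are illustrative, and the commands are only echoed so the sequence can be reviewed first:&lt;br /&gt;

```shell
# Sketch (assumption, not the implemented workflow): replace the domain XML
# embedded in a libvirt state file before restoring it. File names are
# illustrative; the commands are echoed only, so nothing is executed.
statefile="/tmp/example.state"   # the saved machine state from the backup
xmlfile="/tmp/example.xml"       # the conflict-resolved XML description

run() { echo "$@"; }             # dry run: print instead of execute

# Embed the edited XML into the state file ...
run virsh save-image-define "${statefile}" "${xmlfile}"
# ... then restore from the state file as usual.
run virsh restore "${statefile}"
```

Whether &amp;lt;code&amp;gt;save-image-define&amp;lt;/code&amp;gt; is usable here depends on the installed libvirt version.&lt;br /&gt;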
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to retrieve lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Login (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy and paste the commands in this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one of them is wrong, you may restore a machine that is still running or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This directory should contain:&lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now you should save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
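Instead of typing the backup date by hand, it can also be derived from one of the file names listed above with plain shell parameter expansion (a small convenience sketch, using the example names from above):&lt;br /&gt;

```shell
# Derive variables from a backup file name using parameter expansion.
# The file name is the example state file from the listing above.
statefile="b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z"

backupdate="${statefile##*.}"    # everything after the last dot
machinename="${statefile%%.*}"   # everything before the first dot

echo "${backupdate}"    # → 20140109T134445Z
echo "${machinename}"   # → b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6
```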
&lt;br /&gt;
Have a look at the variables once more and &#039;&#039;&#039;double check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the LDIF file to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF (the one you just edited) back to the LDAP directory, after some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, restore the domain from the state file at the backup location (because of the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; limitation mentioned above, the XML embedded in the state file is used):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
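Before cleaning up, it is worth verifying that the domain really is running again. A minimal check (sketched here with the commands echoed only; remove the &amp;lt;code&amp;gt;echo&amp;lt;/code&amp;gt; wrapper to actually execute them):&lt;br /&gt;

```shell
# Verify the restored domain: 'virsh domstate' should report 'running'.
# The machine name is the example from above; commands are echoed only.
machinename="b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6"

run() { echo "$@"; }   # dry run: print instead of execute

run virsh domstate "${machinename}"
run virsh list --all
```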
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3744</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3744"/>
		<updated>2014-06-26T13:46:41Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Step 06: Start the Merge Process (Control instance daemon) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed-up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory might be a single partition which needs to have the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea for backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file. The underlying disk-image is now read only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk-image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the performed write operations from the overlay back to the underlying (original) disk image. Now the underlying image is read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
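In terms of libvirt commands, the three subtasks roughly map onto the following sequence (a simplified sketch with illustrative machine, disk and path names; the commands are echoed only, and the authoritative versions are on the sub-process pages referenced in this article):&lt;br /&gt;

```shell
# Rough mapping of the three subtasks to libvirt/shell commands.
# Machine, disk and path names below are illustrative only.
machinename="kvm-005"
disk="vda"
image="/var/virtualization/vm-persistent/pool/disk.qcow2"
backupdir="/var/backup/virtualization"

run() { echo "$@"; }   # dry run: print instead of execute

# createSnapshot: external disk-only snapshot; writes go to a new overlay
run virsh snapshot-create-as "${machinename}" backup --disk-only --atomic

# exportSnapshot: the base image is now read-only and can be copied safely
run cp -p "${image}" "${backupdir}/"

# commitSnapshot: merge the overlay back into the base image and pivot
run virsh blockcommit "${machinename}" "${disk}" --active --pivot
```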
&lt;br /&gt;
Furthermore, there is a control instance which can independently call these three sub-processes for a given machine. This way, the stoney cloud is able to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. So the control instance only does some very basic stuff: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, the control instance can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance somehow remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a little bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all machines with a successful snapshot&lt;br /&gt;
for ( int i = 0; i &amp;lt; sizeof(successful_snapshots) / sizeof(object); i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the element at this position is not null, the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to have a backend (in our case OpenLDAP) involved in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
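The handshake shown in Figure 1 boils down to both daemons reading and writing the same attribute. The following minimal simulation illustrates the idea, with a temporary file standing in for the sstProvisioningMode attribute (an illustration only; the real daemons use LDAP operations):&lt;br /&gt;

```shell
# Simulate the sstProvisioningMode handshake with a temporary file in place
# of the LDAP attribute (illustration only; real daemons use ldapmodify).
attr="$(mktemp)"

echo "snapshot" > "${attr}"            # Control instance requests a snapshot

mode="$(cat "${attr}")"                # prov-backup-kvm polls the attribute
if [ "${mode}" = "snapshot" ]; then
    echo "snapshotting" > "${attr}"    # daemon announces that it is working
    # ... the actual snapshot would be created here ...
    echo "snapshotted" > "${attr}"     # daemon reports completion
fi

final="$(cat "${attr}")"               # Control instance sees the new mode
echo "${final}"   # → snapshotted
rm -f "${attr}"
```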
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond to the graphical overview above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily, at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039;ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned and should be executed, in the form [YYYY][MM][DD]T[hh][mm][ss]Z ([http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]). The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
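The conversion from the planned local backup time to the UTC timestamp used as the sub tree name can be done with GNU date (a small sketch; Europe/Zurich is assumed as the local timezone here):&lt;br /&gt;

```shell
# Convert the planned local backup time (03:00 with daylight-saving time)
# to the UTC timestamp used as the sub tree name.
# Assumes GNU date; Europe/Zurich is an assumed local timezone.
ts="$(date --utc --date 'TZ="Europe/Zurich" 2012-10-02 03:00' +%Y%m%dT%H%M%SZ)"
echo "${ts}"   # → 20121002T010000Z
```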
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-KVM daemon knows that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the export Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;export&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to export the disk image to the backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd when sstProvisioningMode is modified to&lt;br /&gt;
# export (this way the Provisioning-Backup-KVM daemon knows that it must start the export process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: export&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the Merge Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the merge command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;merging&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is merging the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to merging by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: merging&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the Merging Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the merge command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;merged&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the merging of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: merged&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the Retain Process (Control instance daemon) ====&lt;br /&gt;
By setting the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retain&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to retain (copy and then delete) all the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd when sstProvisioningMode is modified to&lt;br /&gt;
# retain (this way the Provisioning-Backup-KVM daemon knows that it must start the retaining process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retaining&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is retaining the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to retaining by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retaining&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the fc-brokerd when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process can be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
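The mode values above follow a simple handshake: the Control instance daemon writes a command value, and the Provisioning-Backup-KVM daemon acknowledges it with the corresponding &amp;quot;-ing&amp;quot; value and reports completion with the &amp;quot;-ed&amp;quot; value. A minimal sketch of this naming convention (the shell functions are hypothetical; only the mode values come from the LDIF examples above):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the sstProvisioningMode handshake used in the backup steps
# above. The function names are hypothetical; the mode values are taken
# from the LDIF examples.

# Value the Provisioning-Backup-KVM daemon writes while working on a command.
ack_mode() {
    case "$1" in
        merge)  echo "merging" ;;
        retain) echo "retaining" ;;
        *)      return 1 ;;
    esac
}

# Value the daemon writes when the command has completed successfully.
done_mode() {
    case "$1" in
        merging)   echo "merged" ;;
        retaining) echo "retained" ;;
        *)         return 1 ;;
    esac
}

ack_mode merge     # prints "merging"
done_mode retaining # prints "retained"
```

As soon as the Control instance daemon sees the &amp;quot;-ed&amp;quot; value together with sstProvisioningReturnValue 0 and a timestamp in sstProvisioningState, it proceeds to the next step.&lt;br /&gt;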
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation of the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the backend logic to BackupKVMWrapper.pl: the LDAPKVMWrapper generates the list of machines that need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper wrapper (which is executed every day via a cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go again through all machines and update the backup subtree a last time&lt;br /&gt;
#* Check if the backup was successful, if yes, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
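The machine-selection rules in the first step can be sketched as follows (hedged sketch: the function is hypothetical, and the real wrapper reads these flags from the machine&#039;s backup subtree in LDAP):&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical sketch of the LDAPKVMWrapper selection rules: a machine
# is backed up only if it is not excluded and its last backup run
# finished successfully.
#   $1: value of sstbackupexcludefrombackup ("TRUE" if excluded)
#   $2: sstProvisioningMode of the last backup leaf ("finished" on success)
needs_backup() {
    [ "$1" = "TRUE" ] && return 1        # excluded from backup
    [ "$2" = "finished" ] || return 1    # last backup did not finish
    return 0
}

needs_backup "FALSE" "finished" && echo "backup"   # prints "backup"
needs_backup "TRUE"  "finished" || echo "skipped"  # prints "skipped"
```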
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrappers interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason something does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following commands.&lt;br /&gt;
&lt;br /&gt;
If you want to exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree already exists in the LDAP directory, you only need to add the sstbackupexcludefrombackup attribute: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
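Assuming the same DN layout as in the examples above, deleting the backup subtree could look like this (sketch only; &amp;lt;code&amp;gt;ldapdelete -r&amp;lt;/code&amp;gt; removes the entry and all of its children, so double-check the DN before running it):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: build the DN of the machine's backup subtree and delete it
# recursively. The UUID is the example value from above; the ldapdelete
# call is commented out so nothing is deleted by accident.
machineuuid="b9d13dbc-9ab7-4948-9daa-a5709de83dc2"
dn="ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch"
echo "Would delete subtree: ${dn}"
# ldapdelete -r -D cn=Manager,o=stepping-stone,c=ch \
#   -H ldaps://ldapm.stepping-stone.ch/ -W -x "${dn}"
```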
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve the conflict(s)&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live-backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
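The comparison in step 3 can be approximated with a plain &amp;lt;code&amp;gt;diff&amp;lt;/code&amp;gt; of the two entries (sketch with sample data; the paths and attribute values below are made up for illustration):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: diff the retained backend entry against a dump of the live
# entry and count the conflicting attribute lines. All content below is
# sample data for illustration only.
retained="/tmp/vm-001.backend.retained"
live="/tmp/vm-001.backend.live"
printf 'sstMemory: 1024\nsstCores: 2\n' > "$retained"
printf 'sstMemory: 2048\nsstCores: 2\n' > "$live"
diff -u "$retained" "$live" > /tmp/vm-001.backend.diff || true
# Lines starting with +sst/-sst are the conflicting attribute values:
grep -c '^[+-]sst' /tmp/vm-001.backend.diff   # prints "2"
```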
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shutdown the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
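Put together, the restore steps above look roughly like this (dry-run sketch: &amp;lt;code&amp;gt;run&amp;lt;/code&amp;gt; only echoes the commands, and the paths are the placeholders used above):&lt;br /&gt;

```shell
#!/bin/sh
# Dry-run sketch of the restore sequence from the steps above; drop the
# run() wrapper to execute the commands for real.
run() { echo "$@"; }

vm="vm-001"              # placeholder VM name from the examples
retain="/path/to/retain" # placeholder paths from the examples
images="/path/to/images"
xmls="/path/to/xmls"

run virsh shutdown "$vm"                        # stop the VM if running
run virsh undefine "$vm"                        # undefine it if defined
run mv "$retain/$vm.qcow2" "$images/$vm.qcow2"  # repeat per disk image
# (the backend entry from $retain/$vm.backend is written to the backend
#  separately; that step is not a shell command)
run cp -p "$retain/$vm.xml" "$xmls/$vm.xml"     # restore XML description
run virsh restore "$retain/$vm.state" --xml "$xmls/$vm.xml"
```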
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM-Restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all necessary files in the configured retain location, the restore process can be started: we simply copy the disk images back to their original location and restore the VM from the state file (which is also in the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon when sstProvisioningMode is modified to&lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to the current timestamp by the Provisioning-Backup-KVM daemon when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (c.f. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts in the backend entry and the XML description file is not yet implemented&lt;br /&gt;
** In fact, all steps not executed by prov-backup-kvm are not yet properly implemented (c.f. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step from the [[#Restore_2 | restore process ]] is different:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other possibility to restore the machine. It might be easier and safer to recover lost files from the online backup, if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some bash variables so that you can copy-paste the commands in the rest of this guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double-check all variables you set here. If one of them is wrong, you may end up restoring a running machine or overwriting a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
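Before relying on an iteration, you can (optionally; this is not part of the original procedure) count the files carrying the backup-date suffix. For a machine with a single disk image you would expect four files:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Count state, XML, ldif and disk image files for one backup date&lt;br /&gt;
ls -1 *.20140109T134445Z | wc -l   # adjust the date suffix to your iteration&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;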
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create a retain directory for this restore and copy the backed-up ldif file there:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Now you are entering the critical part. You won&#039;t be able to undo the following steps&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences do not matter), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt; as well). Then add the LDIF you just edited back to the LDAP directory, after performing some general replacements:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
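The sed call above rewrites the LDIF before it is re-added: it replaces the snapshotting state with finished and deletes every line containing member. Its effect can be illustrated on a two-line sample (the attribute names and values here are made up for illustration):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
printf &#039;sstProvisioningMode: snapshotting\nmember: cn=foo\n&#039; | sed -e &#039;s/snapshotting/finished/&#039; -e &#039;/member.*/d&#039;&lt;br /&gt;
# prints: sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;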
&lt;br /&gt;
Undefine the machine:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
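Since an interrupted copy would leave a corrupt live image behind, you can (optionally; this step is not part of the original procedure) compare checksums of source and destination before restoring:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# The two checksums per image must be identical&lt;br /&gt;
md5sum /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} \&lt;br /&gt;
       /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;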
&lt;br /&gt;
Finally, restore the domain from the state file at the backup location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The machine should now be up and running again, continuing exactly where it was when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3743</id>
		<title>File:wrapper-interaction.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3743"/>
		<updated>2014-06-26T13:43:55Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:wrapper-interaction.png&amp;amp;quot;: Adapted to the new sub steps (create-, export- and commitSnapshot)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows how the two wrappers interact with the LDAP backend&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3742</id>
		<title>File:wrapper-interaction.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3742"/>
		<updated>2014-06-26T13:43:32Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:wrapper-interaction.png&amp;amp;quot;: Reverted to version as of 13:40, 26 June 2014&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows how the two wrappers interact with the LDAP backend&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3741</id>
		<title>File:wrapper-interaction.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3741"/>
		<updated>2014-06-26T13:43:09Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:wrapper-interaction.png&amp;amp;quot;: Adapted to the new sub steps (create-, export- and commitSnapshot)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows how the two wrappers interact with the LDAP backend&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:Daemon-communication.png&amp;diff=3740</id>
		<title>File:Daemon-communication.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:Daemon-communication.png&amp;diff=3740"/>
		<updated>2014-06-26T13:41:16Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:Daemon-communication.png&amp;amp;quot;: Adapted to the new sub-steps (create-, export- and commitSnapshot)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Communication between control instance and prov-backup-kvm daemon&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3739</id>
		<title>File:wrapper-interaction.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3739"/>
		<updated>2014-06-26T13:40:48Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:wrapper-interaction.png&amp;amp;quot;: Reverted to version as of 11:58, 23 October 2013&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows how the two wrappers interact with the LDAP backend&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3738</id>
		<title>File:wrapper-interaction.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=File:wrapper-interaction.png&amp;diff=3738"/>
		<updated>2014-06-26T13:37:56Z</updated>

		<summary type="html">&lt;p&gt;Pat: Pat uploaded a new version of &amp;amp;quot;File:wrapper-interaction.png&amp;amp;quot;: Adapted to the new substeps (create-, export- and commitSnapshot)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Shows how the two wrappers interact with the LDAP backend&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3737</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3737"/>
		<updated>2014-06-26T13:25:28Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* commitSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (snapshot, merge and retain) are shown later.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the snapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 2: Detailed workflow for the merge process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 3: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#commitSnapshot | commitSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 4: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs all debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
#If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new on start)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need to have at least one configuration which applies for the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You are able to overwrite the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies for the VM is evaluated in the following way:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
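The lookup cascade above is a simple first-match fallback chain. In shell terms (illustrative only; the daemon implements this in Perl):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# First non-empty value wins: VM, then VM-Pool, then stoney-cloud-wide&lt;br /&gt;
effective_config=&amp;quot;${vm_config:-${pool_config:-${cloud_wide_config}}}&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;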
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value defining how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, it is independent of the VM-Nodes. Therefore no guarantee can be given that this RAM-Disk exists on all the VM-Nodes. A check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes of the KVM-Backup script and their meaning (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
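When the daemon fails, only the numeric exit code is reported. A small lookup helper (hypothetical; not shipped with the daemon) saves a trip to Constants.pm for the most common codes:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backup_kvm_error() {&lt;br /&gt;
    case &amp;quot;$1&amp;quot; in&lt;br /&gt;
        0)  echo &amp;quot;SUCCESS_CODE&amp;quot; ;;&lt;br /&gt;
        1)  echo &amp;quot;UNDEFINED_ERROR&amp;quot; ;;&lt;br /&gt;
        2)  echo &amp;quot;MISSING_PARAMETER_IN_CONFIG_FILE&amp;quot; ;;&lt;br /&gt;
        5)  echo &amp;quot;CANNOT_SAVE_MACHINE_STATE&amp;quot; ;;&lt;br /&gt;
        21) echo &amp;quot;CANNOT_RESTORE_MACHINE&amp;quot; ;;&lt;br /&gt;
        *)  echo &amp;quot;see Constants.pm for code $1&amp;quot; ;;&lt;br /&gt;
    esac&lt;br /&gt;
}&lt;br /&gt;
backup_kvm_error 21   # prints CANNOT_RESTORE_MACHINE&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;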
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but merge (commit) the backing-store file back into the original one&lt;br /&gt;
*** This way the backup (merge) time can be reduced considerably.&lt;br /&gt;
*** This needs a different behaviour: save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3736</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3736"/>
		<updated>2014-06-26T13:25:06Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* exportSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (snapshot, merge and retain) are shown later.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the snapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 2: Detailed workflow for the merge process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#exportSnapshot | exportSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 3: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Retain | Retain: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 4: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs all debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new&lt;br /&gt;
# when the daemon starts)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
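As a quick sanity check, the INI-style configuration above can be read with a standard parser. The following is a hypothetical Python sketch for illustration only (the daemon itself is written in Perl, and the placeholder values are invented):

```python
import configparser

# Minimal excerpt of the configuration above; the ENVIRONMENT and
# [Database] values are hypothetical stand-ins for the placeholders.
SAMPLE = """
[Global]
LOG_DEBUG = 1
LOG_INFO = 1
ENVIRONMENT = node-01.example.com

[Database]
BACKEND = LDAP
PORT = 389

[Backup]
EXPORT_COMMAND = cp -p
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# The 0/1 log flags parse directly as booleans.
assert parser.getboolean("Global", "LOG_DEBUG") is True
print(parser["Backup"]["EXPORT_COMMAND"])  # prints: cp -p
```

This is only a convenient way to verify that a hand-edited configuration file is syntactically valid before restarting the daemon.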
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need at least one configuration that applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration that applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
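The three-step lookup above is a simple fallback chain. A minimal sketch, assuming hypothetical dictionaries rather than the daemon's actual Perl/LDAP internals:

```python
def effective_config(vm_configs, pool_configs, cloud_config, vm_id, pool_id):
    """Return the backup configuration that applies to a VM:
    VM-specific first, then VM-Pool-specific, then stoney-cloud-wide."""
    if vm_id in vm_configs:
        return vm_configs[vm_id]
    if pool_id in pool_configs:
        return pool_configs[pool_id]
    return cloud_config

# Example: the pool overrides the cloud-wide default for this VM.
cloud = {"sstBackupNumberOfIterations": 1}
pools = {"pool-a": {"sstBackupNumberOfIterations": 3}}
vms = {}

cfg = effective_config(vms, pools, cloud, "vm-1", "pool-a")
assert cfg["sstBackupNumberOfIterations"] == 3
```

The point of the ordering is that the most specific entry wins, and the stoney-cloud-wide entry acts as a guaranteed default.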
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, this attribute is independent of the VM-Nodes. Therefore, no guarantee can be given that this RAM-Disk exists on all the VM-Nodes. A check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in megabytes per second). Default is 0 (unlimited). Integer attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
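The permission defaults above (660 for the image, 770 for its directory) are ordinary octal modes. A hedged sketch of how such octal strings translate into chmod calls, using a throwaway temporary directory and hypothetical variable names (not the daemon's code):

```python
import os
import stat
import tempfile

# Hypothetical stand-ins for sstVirtualizationDiskImagePermission (660)
# and sstVirtualizationDiskImageDirectoryPermission (770), as the octal
# strings would arrive from the backend.
IMAGE_PERM = "660"
DIR_PERM = "770"

backup_dir = tempfile.mkdtemp()
image = os.path.join(backup_dir, "disk.qcow2")
open(image, "w").close()  # empty placeholder standing in for a disk image

# The octal string from the configuration is parsed with base 8.
os.chmod(image, int(IMAGE_PERM, 8))
os.chmod(backup_dir, int(DIR_PERM, 8))

assert stat.S_IMODE(os.stat(image).st_mode) == 0o660
assert stat.S_IMODE(os.stat(backup_dir).st_mode) == 0o770
```

Parsing the string with base 8 matters: `int("660")` would yield decimal 660, which is not the intended mode.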
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
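The sequence attributes above are multi-valued &amp;quot;order: UUID&amp;quot; pairs, so sorting them numerically yields the start (or stop) order. A hypothetical sketch, not the daemon's implementation:

```python
# Multi-valued sstVirtualizationVirtualMachineSequenceStart values,
# written as "order: UUID" strings (UUIDs shortened for readability).
raw_values = ["1: UUID2", "0: UUID3", "2: UUID1"]

def sequence_order(values):
    """Sort 'order: uuid' pairs by their numeric order and return the UUIDs."""
    pairs = []
    for value in values:
        order, _, uuid = value.partition(":")
        pairs.append((int(order), uuid.strip()))
    return [uuid for _, uuid in sorted(pairs)]

assert sequence_order(raw_values) == ["UUID3", "UUID2", "UUID1"]
```

Sorting numerically (rather than lexically on the raw strings) keeps the order correct once more than ten machines are sequenced.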
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm KVMConstants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
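When driving the daemon from other tooling, it helps to map a numeric exit status back to its symbolic name. A small illustrative lookup over a subset of the constants above (the helper name is invented, not part of the module):

```python
# Subset of the exit codes listed above, keyed by numeric value.
EXIT_CODES = {
    0: "SUCCESS_CODE",
    1: "UNDEFINED_ERROR",
    2: "MISSING_PARAMETER_IN_CONFIG_FILE",
    4: "NOT_ENOUGH_SPACE_ON_RAM_DISK",
    47: "NOT_ENOUGH_DISK_SPACE",
}

def describe(code):
    """Translate a numeric exit status into its constant name."""
    return EXIT_CODES.get(code, "UNKNOWN_EXIT_CODE")

assert describe(0) == "SUCCESS_CODE"
assert describe(47) == "NOT_ENOUGH_DISK_SPACE"
assert describe(999) == "UNKNOWN_EXIT_CODE"
```

Note that the constant list skips the value 14, so any such table should be built from Constants.pm rather than assumed to be contiguous.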
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This way the backup (merge) time can be reduced considerably.&lt;br /&gt;
*** Needs different behaviour for save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3735</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3735"/>
		<updated>2014-06-26T13:24:37Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* createSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (snapshot, merge and retain) are shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the snapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#createSnapshot | createSnapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the merge process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#Merge | Merge: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#Retain | Retain: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 5: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true the script logs all debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new&lt;br /&gt;
# when the daemon starts)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN, in that way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies for every&lt;br /&gt;
# VM-Pool and every VM if not overwritten by a VM-Pool- or VM-specific &lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need at least one configuration that applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can override the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration that applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
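The three-step lookup above is a simple fallback chain. A minimal sketch, assuming hypothetical dictionaries rather than the daemon's actual Perl/LDAP internals:

```python
def effective_config(vm_configs, pool_configs, cloud_config, vm_id, pool_id):
    """Return the backup configuration that applies to a VM:
    VM-specific first, then VM-Pool-specific, then stoney-cloud-wide."""
    if vm_id in vm_configs:
        return vm_configs[vm_id]
    if pool_id in pool_configs:
        return pool_configs[pool_id]
    return cloud_config

# Example: the pool overrides the cloud-wide default for this VM.
cloud = {"sstBackupNumberOfIterations": 1}
pools = {"pool-a": {"sstBackupNumberOfIterations": 3}}
vms = {}

cfg = effective_config(vms, pools, cloud, "vm-1", "pool-a")
assert cfg["sstBackupNumberOfIterations"] == 3
```

The point of the ordering is that the most specific entry wins, and the stoney-cloud-wide entry acts as a guaranteed default.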
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value specifying how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or a specific virtual machine template, this attribute is independent of the VM-Nodes. Therefore, no guarantee can be given that this RAM-Disk exists on all the VM-Nodes. A check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force start VM in the case of not being able to restore the VM State during the backup process. TRUE or FALSE, default is FALSE. Attention: If set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in megabytes per second). Default is 0 (unlimited). Integer attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be stopped in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi valued. This attribute must exist in all of the virtual machine entries that are to be started in a certain order. Example (0,1,2, ... is the order, UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm KVMConstants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
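Several of the error codes above deal with disk space (NOT_ENOUGH_DISK_SPACE, NO_DISK_SPACE_INFORMATION). A minimal pre-flight check in that spirit could look as follows; the temporary files are stand-ins for the real image and backup paths, not part of the module:

```shell
# Illustrative pre-flight check related to NOT_ENOUGH_DISK_SPACE (47):
# compare the size of a disk image against the free space at the backup target.
image=$(mktemp)                       # stand-in for the qcow2 disk image
dd if=/dev/zero of="$image" bs=1024 count=4 2>/dev/null
backup_dir=$(mktemp -d)               # stand-in for sstBackupRootDirectory
needed=$(stat -c %s "$image")
set -- $(stat -f -c '%a %S' "$backup_dir")   # free blocks, block size (GNU stat)
free=$(( $1 * $2 ))
if [ "$free" -ge "$needed" ]; then
    status="enough space"
else
    status="not enough space"         # the daemon would abort with code 47 here
fi
echo "$status"
rm -rf "$image" "$backup_dir"
```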
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** No longer merge the original file into the new one, but merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This reduces the backup (merge) time considerably.&lt;br /&gt;
*** Needs different behaviour for save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3734</id>
		<title>stoney conductor: VM Backup</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_VM_Backup&amp;diff=3734"/>
		<updated>2014-06-26T13:24:07Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* Snapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This page describes how the VMs and VM-Templates are backed up and restored inside the [http://www.stoney-cloud.org stoney cloud].&lt;br /&gt;
&lt;br /&gt;
= Requirements =&lt;br /&gt;
* sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
** This directory should be a separate partition with the same size as your partition for the live images (it&#039;s a &amp;quot;copy&amp;quot; of the live partition)&lt;br /&gt;
* sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
** This directory must be on the same partition as your live images&lt;br /&gt;
* A working stoney cloud, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].&lt;br /&gt;
* The backup configuration must be set: [[stoney_conductor:_OpenLDAP_directory_data_organisation#Backup | stoney conductor: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Backup =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The main idea behind backing up a VM or a VM-Template is to divide the task into three subtasks: &lt;br /&gt;
* createSnapshot: Create a disk-only snapshot. A new overlay file is created and all write operations are performed on this file; the underlying disk image is now read-only.&lt;br /&gt;
* exportSnapshot: Copy the read-only disk image to the backup location.&lt;br /&gt;
* commitSnapshot: Commit the write operations performed on the overlay back into the underlying (original) disk image. The underlying image is then read-write again and the overlay image can be deleted.&lt;br /&gt;
A more detailed and technical description for these three sub-processes can be found [[#Sub-Processes | here]].&lt;br /&gt;
&lt;br /&gt;
Furthermore, there is a control instance which can call these three sub-processes independently for a given machine. This allows the stoney cloud to handle different cases:&lt;br /&gt;
=== Backup a single machine ===&lt;br /&gt;
The procedure for backing up a single machine is very simple: just call the three sub-processes (createSnapshot, exportSnapshot and commitSnapshot) one after the other. The control instance then only has to do some very basic work: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machine = args[0];&lt;br /&gt;
&lt;br /&gt;
if( createSnapshot( machine ) )&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
    if ( exportSnapshot( machine ) )&lt;br /&gt;
    {&lt;br /&gt;
&lt;br /&gt;
        if ( commitSnapshot( machine ) )&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Successfully backed up machine %s\n&amp;quot;, machine);&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
} else&lt;br /&gt;
{&lt;br /&gt;
    printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machine, error);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Backup multiple machines at the same time ===&lt;br /&gt;
When backing up multiple machines at the same time, we need to make sure that the snapshots for the machines are taken as close together as possible. Therefore, the control instance should first call the createSnapshot process for all machines. After every machine has been snapshotted, it can call the exportSnapshot and commitSnapshot processes for every machine. The most important part here is that the control instance remembers whether the snapshot for a given machine was successful, because if the snapshot failed, it must not call the exportSnapshot and commitSnapshot processes. So the control instance needs a bit more logic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
object machines[] = args[0];&lt;br /&gt;
object successful_snapshots[];&lt;br /&gt;
&lt;br /&gt;
// Snapshot all machines&lt;br /&gt;
for( int i = 0; i &amp;lt;  sizeof(machines) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the snapshot was successful, put the machine into the&lt;br /&gt;
    // successful_snapshots array&lt;br /&gt;
    if ( createSnapshot( machines[i] ) )&lt;br /&gt;
    {&lt;br /&gt;
        successful_snapshots[i] = machines[i];&lt;br /&gt;
    } else&lt;br /&gt;
    {&lt;br /&gt;
        printf(&amp;quot;Error while snapshotting machine %s: %s\n&amp;quot;, machines[i],error);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Export and commit all machines whose snapshot succeeded&lt;br /&gt;
for ( int i = 0; i &amp;lt;  sizeof(successful_snapshots) / sizeof(object) ; i++ )&lt;br /&gt;
{&lt;br /&gt;
    // If the element at this position is not null, the snapshot&lt;br /&gt;
    // for this machine was successful&lt;br /&gt;
    if ( successful_snapshots[i] )&lt;br /&gt;
    {&lt;br /&gt;
        if ( exportSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
        {&lt;br /&gt;
            if ( commitSnapshot( successful_snapshots[i] ) )&lt;br /&gt;
            {&lt;br /&gt;
              printf(&amp;quot;Successfully backed-up machine %s\n&amp;quot;, successful_snapshots[i]);&lt;br /&gt;
            } else&lt;br /&gt;
            {&lt;br /&gt;
                printf(&amp;quot;Error while committing snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
            }&lt;br /&gt;
&lt;br /&gt;
        } else&lt;br /&gt;
        {&lt;br /&gt;
            printf(&amp;quot;Error while exporting snapshot for machine %s: %s\n&amp;quot;, successful_snapshots[i],error);&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sub-Processes ===&lt;br /&gt;
See also [[Libvirt_external_snapshot_with_GlusterFS]]&lt;br /&gt;
==== createSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Part_2:_Create_the_snapshot_using_virsh]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#createSnapshot]]&lt;br /&gt;
&lt;br /&gt;
==== exportSnapshot ====&lt;br /&gt;
# Simply copy the underlying image to the backup location&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;image&amp;gt;.qcow2 /&amp;lt;path&amp;gt;/&amp;lt;to&amp;gt;/&amp;lt;backup&amp;gt;/&amp;lt;location&amp;gt;/.&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#exportSnapshot]]&lt;br /&gt;
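Assuming the file:// protocol (backup and live locations on the same host), the copy can be verified after the fact. The following sketch uses temporary stand-in files rather than real image paths:

```shell
# Sketch: copy a disk image to the backup location and verify the copy.
# mktemp stand-ins replace the real image and backup paths.
src=$(mktemp)
dd if=/dev/zero of="$src" bs=1024 count=8 2>/dev/null
backup_dir=$(mktemp -d)
cp -p "$src" "$backup_dir/"           # -p preserves ownership/permissions/times
if cmp -s "$src" "$backup_dir/$(basename "$src")"; then
    status=verified
else
    status=corrupt                    # cf. the CORRUPT_DISK_IMAGE_FOUND error code
fi
echo "$status"
rm -rf "$src" "$backup_dir"
```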
&lt;br /&gt;
==== commitSnapshot ====&lt;br /&gt;
For the commands see [[Libvirt_external_snapshot_with_GlusterFS#Cleanup.2FCommit_.28Online.29]]&lt;br /&gt;
&lt;br /&gt;
For the workflow see [[stoney_conductor:_prov-backup-kvm#commitSnapshot]]&lt;br /&gt;
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
Since the stoney cloud is (as the name already says) a cloud solution, it makes sense to involve a backend (in our case OpenLDAP) in the whole process. This makes it possible to run the backup jobs decentralized on every vm-node: the control instance modifies the backend, and these changes are seen by the different backup daemons on the vm-nodes. The communication could look as shown in the following picture (Figure 1): &lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-communication.png|800px|thumbnail|none|Figure 1: Communication between the control instance and the prov-backup-kvm daemon through the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
=== Control-Instance Daemon Interaction for creating a Backup with LDIF Examples ===&lt;br /&gt;
The step numbers correspond to the graphical overview above.&lt;br /&gt;
&lt;br /&gt;
==== Step 00: Backup Configuration for a virtual machine ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The following backup configuration says that the backup should be done daily at 03:00 hours (localtime).&lt;br /&gt;
# * * * * * command to be executed&lt;br /&gt;
# - - - - -&lt;br /&gt;
# | | | | |&lt;br /&gt;
# | | | | +----- day of week (0 - 6) (Sunday=0)&lt;br /&gt;
# | | | +------- month (1 - 12)&lt;br /&gt;
# | | +--------- day of month (1 - 31)&lt;br /&gt;
# | +----------- hour (0 - 23)&lt;br /&gt;
# +------------- min (0 - 59)&lt;br /&gt;
# localtime in the crontab entry&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
objectclass: sstCronObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
description: This sub tree contains the backup plan for the virtual machine kvm-005.&lt;br /&gt;
sstCronMinute: 0&lt;br /&gt;
sstCronHour: 3&lt;br /&gt;
sstCronDay: *&lt;br /&gt;
sstCronMonth: *&lt;br /&gt;
sstCronDayOfWeek: *&lt;br /&gt;
sstCronActive: TRUE&lt;br /&gt;
sstBackupRootDirectory: file:///var/backup/virtualization&lt;br /&gt;
sstBackupRetainDirectory: file:///var/virtualization/retain&lt;br /&gt;
sstBackupRamDiskLocation: file:///mnt/ramdisk-test&lt;br /&gt;
sstVirtualizationDiskImageFormat: qcow2&lt;br /&gt;
sstVirtualizationDiskImageOwner: root&lt;br /&gt;
sstVirtualizationDiskImageGroup: vm-storage&lt;br /&gt;
sstVirtualizationDiskImagePermission: 0660&lt;br /&gt;
sstBackupNumberOfIterations: 1&lt;br /&gt;
sstVirtualizationVirtualMachineForceStart: FALSE&lt;br /&gt;
sstVirtualizationBandwidthMerge: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
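The sstCron* attributes above map one-to-one onto the crontab fields shown in the comment. A small sketch of that mapping (the command path is hypothetical, not part of stoney cloud):

```shell
# Assemble a crontab line from the sstCron* values in the entry above
# (sstCronMinute: 0, sstCronHour: 3, the rest '*').
# /usr/bin/backup-job is a hypothetical placeholder command.
min=0; hour=3; dom='*'; mon='*'; dow='*'
line=$(printf '%s %s %s %s %s /usr/bin/backup-job' "$min" "$hour" "$dom" "$mon" "$dow")
echo "$line"    # 0 3 * * * /usr/bin/backup-job
```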
&lt;br /&gt;
==== Step 01: Initialize Backup Sub Tree (Control instance daemon) ====&lt;br /&gt;
The sub tree &#039;&#039;&#039; ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&#039;&#039;&#039; reflects the time when the backup is planned (in the form [YYYY][MM][DD]T[hh][mm][ss]Z, [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601]) and should be written at the time when the backup is planned and scheduled for execution. The section &#039;&#039;&#039;20121002T010000Z&#039;&#039;&#039; means the following:&lt;br /&gt;
* Year: 2012&lt;br /&gt;
* Month: 10&lt;br /&gt;
* Day of Month: 02&lt;br /&gt;
* Hour of Day: 01&lt;br /&gt;
* Minutes: 00&lt;br /&gt;
* Seconds: 00&lt;br /&gt;
Please be aware that the time is to be written in UTC (see also the comment in the LDIF example below).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# This entry is the place holder for the backup, which is to be executed at 03:00 hours (localtime with daylight-saving). This&lt;br /&gt;
# leads to the 20121002T010000Z timestamp (which is written in UTC).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: sstProvisioning&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
ou: 20121002T010000Z&lt;br /&gt;
sstProvisioningExecutionDate: 0&lt;br /&gt;
sstProvisioningMode: initialize&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
sstProvisioningState: 20121002T014513Z&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
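The sub-tree name can be derived from the planned local backup time with GNU date; 03:00 local time at UTC+2 (daylight-saving) yields 01:00 UTC:

```shell
# Convert the planned local backup time (03:00, UTC+2 during daylight-saving)
# into the UTC sub-tree name format [YYYY][MM][DD]T[hh][mm][ss]Z (ISO 8601).
ts=$(date -u -d '2012-10-02 03:00:00 +0200' +%Y%m%dT%H%M%SZ)
echo "$ts"    # 20121002T010000Z
```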
&lt;br /&gt;
==== Step 02: Finalize the Initialization (Control instance daemon) ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is modified.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: initialized&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Start the Snapshot Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshot&#039;&#039;&#039;, the actual backup process is kicked off by the Control instance daemon.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# snapshot (this way the Provisioning-Backup-VKM daemon knows, that it must start the snapshotting process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 04: Starting the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotting&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is snapshotting the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to snapshotting by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotting&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Finalizing the Snapshot Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the snapshot command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;snapshotted&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the snapshot of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010011Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: snapshotted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Start the Merge Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;merge&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to merge the backing file disk image back into the current disk image.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# merge (this way the Provisioning-Backup-VKM daemon knows, that it must start the merging process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: merge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Starting the Merge Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the merge command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;merging&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is merging the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to merging by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: merging&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 08: Finalizing the Merging Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the merge command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;merged&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the merging of the virtual machine or virtual machine template is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T010500Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: merged&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the Retain Process (Control instance daemon) ====&lt;br /&gt;
With the setting of the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retain&#039;&#039;&#039;, the Control instance daemon tells the Provisioning-Backup-KVM daemon to retain (copy and then delete) all the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the fc-brokerd, when sstProvisioningMode is modified to&lt;br /&gt;
# retain (this way the Provisioning-Backup-VKM daemon knows, that it must start the retaining process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retaining&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is retaining the necessary files to the configured backup location.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to retaining by the Provisioning-Backup-VKM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retaining&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the Retain Process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the retain command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;retained&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and the &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the retaining of all the necessary files to the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-VKM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the fc-brokerd knows, that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: retained&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the Backup Process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;retained&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the backup process is finished; therefore, a new backup process can be started.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with current time by the fc-brokerd, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the backup process is finished; therefore a new backup process could be started.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
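Taken together, steps 03 to 12 form a simple hand-shake over sstProvisioningMode. The transitions used in the LDIF examples above can be summarised in a small sketch (the two helper functions are illustrative, not part of the daemons):

```shell
# sstProvisioningMode hand-shake between the control instance and the
# Provisioning-Backup-KVM daemon, as used in the LDIF examples above.
progress_mode() {   # command issued -> mode while the daemon is working
    case "$1" in
        snapshot) echo snapshotting ;;
        merge)    echo merging ;;
        retain)   echo retaining ;;
    esac
}
done_mode() {       # command issued -> mode reported on success
    case "$1" in
        snapshot) echo snapshotted ;;
        merge)    echo merged ;;
        retain)   echo retained ;;
    esac
}
for cmd in snapshot merge retain; do
    echo "$cmd -> $(progress_mode "$cmd") -> $(done_mode "$cmd")"
done
```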
&lt;br /&gt;
== Current Implementation (Backup) ==&lt;br /&gt;
Since we do not have a working control instance yet, we need a workaround for backing up the machines: &lt;br /&gt;
&lt;br /&gt;
* We already have a BackupKVMWrapper.pl script (File-Backend) which executes the three [[#Sub-Processes | sub-processes ]] in the correct order for a given list of machines (see [[#Backup multiple machines at the same_time]]).&lt;br /&gt;
* We already have the implementation for the whole backup with the LDAP-Backend (see [[ stoney conductor: prov backup kvm ]]).&lt;br /&gt;
* We can now combine these two existing scripts and create a wrapper (let&#039;s call it LDAPKVMWrapper) which adds the missing logic around BackupKVMWrapper.pl. In fact, the LDAPKVMWrapper will generate the list of machines which need a backup.&lt;br /&gt;
&lt;br /&gt;
The behaviour on our servers is as follows (c.f. Figure 2):&lt;br /&gt;
# The (decentralized) LDAPKVMWrapper (which is executed every day via cronjob) generates a list of all machines running on the current host.&lt;br /&gt;
#* Currently, the cronjob on the hosts looks like this: &amp;lt;code&amp;gt;00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
#* For each of these machines:&lt;br /&gt;
#** Check if the machine is excluded from the backup, if yes, remove the machine from the list&lt;br /&gt;
#** Check if the last backup was successful, if not, remove the machine from the list&lt;br /&gt;
# Update the backup subtree for each machine in the list&lt;br /&gt;
#* Remove the old backup leaf (the &amp;quot;yesterday-leaf&amp;quot;), and add a new one (the &amp;quot;today-leaf&amp;quot;) &lt;br /&gt;
#* After this step, the machines are ready to be backed up&lt;br /&gt;
# Call the BackupKVMWrapper.pl script with the machines list as a parameter&lt;br /&gt;
# Wait for the BackupKVMWrapper.pl script to finish&lt;br /&gt;
# Go through all machines again and update the backup subtree one last time&lt;br /&gt;
#* If the backup was successful, set sstProvisioningMode = finished (see also TBD)&lt;br /&gt;
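The list-building part of the wrapper (steps 1 and 2 above) can be sketched as follows. The two helper functions are hypothetical stand-ins for the real LDAP lookups (exclusion flag, last backup state):

```shell
# Sketch of the LDAPKVMWrapper machine-list logic; is_excluded and
# last_backup_ok are stand-ins for the real LDAP queries.
is_excluded()    { [ "$1" = "vm-002" ]; }   # pretend vm-002 is excluded
last_backup_ok() { [ "$1" != "vm-003" ]; }  # pretend vm-003 failed last time
machines="vm-001 vm-002 vm-003 vm-004"
list=""
for m in $machines; do
    is_excluded "$m"    && continue         # drop excluded machines
    last_backup_ok "$m" || continue         # drop machines whose last backup failed
    list="$list $m"
done
echo "backup list:$list"
```

The resulting list would then be handed to BackupKVMWrapper.pl as a parameter.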
&lt;br /&gt;
&lt;br /&gt;
[[File:wrapper-interaction.png|500px|thumbnail|none|Figure 2: How the two wrappers interact with the LDAP backend]]&lt;br /&gt;
&lt;br /&gt;
* If for some reason the backup does not work at all, the whole backup process can be deactivated by simply disabling the LDAPKVMWrapper cronjob:&lt;br /&gt;
** &amp;lt;code&amp;gt;crontab -e&amp;lt;/code&amp;gt;&lt;br /&gt;
** Comment out the LDAPKVMWrapper cronjob line: &amp;lt;code&amp;gt;#00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM&amp;lt;/code&amp;gt;&lt;br /&gt;
=== How to exclude a machine from the backup ===&lt;br /&gt;
Log in to one of the [[VM-Node | vm-nodes]] and execute the following command.&lt;br /&gt;
&lt;br /&gt;
To exclude a machine from the backup run, you simply need to add the following entry to your LDAP directory: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapadd -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
objectclass: top&lt;br /&gt;
objectclass: organizationalUnit&lt;br /&gt;
objectclass: sstVirtualizationBackupObjectClass&lt;br /&gt;
ou: backup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the backup subtree in the LDAP directory already exists, you need to add the sstbackupexcludefrombackup attribute instead: &lt;br /&gt;
&amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
machineuuid=&amp;quot;&amp;lt;UUID OF THE MACHINE-NAME&amp;gt;&amp;quot; # e.g.: b9d13dbc-9ab7-4948-9daa-a5709de83dc2&lt;br /&gt;
cat &amp;lt;&amp;lt; EOF | ldapmodify -D cn=Manager,o=stepping-stone,c=ch -H ldaps://ldapm.stepping-stone.ch/ -W -x&lt;br /&gt;
dn: ou=backup,sstVirtualMachine=${machineuuid},ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
add: objectClass&lt;br /&gt;
objectClass: sstVirtualizationBackupObjectClass&lt;br /&gt;
-&lt;br /&gt;
add: sstbackupexcludefrombackup&lt;br /&gt;
sstbackupexcludefrombackup: TRUE&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Re-include the machine to the backup ====&lt;br /&gt;
If you want to re-include a machine, simply delete the machine&#039;s whole backup subtree. It will be recreated during the next backup run.&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
= Restore =&lt;br /&gt;
== Basic idea ==&lt;br /&gt;
The restore process, similar to the backup process, can be divided into three sub-processes: &lt;br /&gt;
* Unretain the small files: Copy the small files (backend entry, XML description) from the backup directory to the retain directory&lt;br /&gt;
* Unretain the big files: Copy the big files (state file, disk image(s)) from the backup directory to the retain directory&lt;br /&gt;
* Restore the machine: Replace the live disk image(s) by the one(s) from the backup and restore the machine from the state file&lt;br /&gt;
&lt;br /&gt;
Additionally the restore process can also be divided into two phases: &lt;br /&gt;
* User-Interaction phase: After the &amp;quot;unretain small files&amp;quot; step, the user needs to decide two things:&lt;br /&gt;
** On conflicts between the backend entry file and the XML description, the user needs to decide how to resolve these conflicts&lt;br /&gt;
** The user can also abort the restore process up to this point. After that, the restore cannot be aborted or undone! &lt;br /&gt;
* Non-User-Interaction phase: The daemons communicate through the backend between each other and the restore process continues without further user input (c.f. [[#Communication_through_backend_2 | Communication through backend]])&lt;br /&gt;
&lt;br /&gt;
=== Sub Processes ===&lt;br /&gt;
==== Unretain small files ====&lt;br /&gt;
This workflow assumes that the backup directory is on the same physical server as the retain directory (protocol is file://)&lt;br /&gt;
# Copy the backend-entry file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.backend /path/to/retain/vm-001.backend&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the XML description from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.xml /path/to/retain/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Compare the backend-entry file (the one in the retain directory) with the live backend entry&lt;br /&gt;
#* Resolve all conflicts between these two backend entries&lt;br /&gt;
#** Modify the backend entry at the retain location accordingly&lt;br /&gt;
# Apply the same changes for the XML description at the retain location (backend entry and XML description need to be consistent).&lt;br /&gt;
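The conflict check in steps 3 and 4 boils down to comparing two backend entries. A minimal sketch, using scratch files and invented attribute values instead of a real backend export:&lt;br /&gt;

```shell
# Scratch directories stand in for the retain directory and a live
# backend export; the sstMemory values are invented for the demo.
retain=$(mktemp -d)
live=$(mktemp -d)
printf 'sstMemory: 1024\n' | tee "$retain/vm-001.backend"
printf 'sstMemory: 2048\n' | tee "$live/vm-001.backend"
conflicts=no
# A non-zero diff exit status means the entries disagree and the user
# must resolve the conflict before the restore may continue.
if ! diff -u "$retain/vm-001.backend" "$live/vm-001.backend"; then
  conflicts=yes
fi
echo "conflicts=$conflicts"
```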
&lt;br /&gt;
==== Unretain large files ====&lt;br /&gt;
# Copy the state file from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.state /path/to/retain/vm-001.state&amp;lt;/source&amp;gt;&lt;br /&gt;
# Copy the disk image(s) from the backup directory to the retain directory:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/backup/vm-001.qcow2 /path/to/retain/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
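Because a VM may have several disk images, step 2 is naturally a loop. A minimal sketch with scratch directories standing in for the real backup and retain locations:&lt;br /&gt;

```shell
backup=$(mktemp -d)   # stands in for /path/to/backup
retain=$(mktemp -d)   # stands in for /path/to/retain
touch "$backup/vm-001.state" "$backup/vm-001.qcow2" "$backup/vm-001-data.qcow2"
# Copy the state file and every disk image, preserving mode and
# timestamps (cp -p), just as the numbered steps above do one by one.
cp -p "$backup/vm-001.state" "$retain/"
for img in "$backup"/*.qcow2; do
  cp -p "$img" "$retain/"
done
ls "$retain"
```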
&lt;br /&gt;
==== Restore the VM ====&lt;br /&gt;
# Shut down the VM if it is running:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh shutdown vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Undefine the VM if it is still defined: &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh undefine vm-001&amp;lt;/source&amp;gt;&lt;br /&gt;
# Overwrite the original disk image:&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;mv /path/to/retain/vm-001.qcow2 /path/to/images/vm-001.qcow2&amp;lt;/source&amp;gt;&lt;br /&gt;
#** &#039;&#039;&#039;Important:&#039;&#039;&#039; If a VM has more than just one disk image, repeat this step for every disk image&lt;br /&gt;
# Restore the VM&#039;s backend entry: &lt;br /&gt;
#* Write the backend entry from the retain location (&amp;lt;code&amp;gt;/path/to/retain/vm-001.backend&amp;lt;/code&amp;gt;) to the backend&lt;br /&gt;
# Overwrite the VM&#039;s XML description with the one from the retain location &lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;cp -p /path/to/retain/vm-001.xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
# Restore the VM from the state file with the corrected XML&lt;br /&gt;
#* &amp;lt;source lang=&amp;quot;bash&amp;quot;&amp;gt;virsh restore /path/to/retain/vm-001.state --xml /path/to/xmls/vm-001.xml&amp;lt;/source&amp;gt;&lt;br /&gt;
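Put together, the whole sequence for a single-disk VM looks roughly as follows. This is a dry-run sketch only: the commands are printed rather than executed, and vm-001 and all paths are placeholders:&lt;br /&gt;

```shell
vm="vm-001"                                # placeholder VM name
retain="/path/to/retain"; images="/path/to/images"; xmls="/path/to/xmls"
plan=""
run() { plan="$plan $*;"; echo "+ $*"; }   # print instead of execute
run virsh shutdown "$vm"
run virsh undefine "$vm"
run mv "$retain/$vm.qcow2" "$images/$vm.qcow2"
run cp -p "$retain/$vm.xml" "$xmls/$vm.xml"
run virsh restore "$retain/$vm.state" --xml "$xmls/$vm.xml"
```

Dropping the run wrapper turns the sketch into the real sequence, at which point the double-checking warnings below apply.&lt;br /&gt;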
&lt;br /&gt;
== Communication through backend ==&lt;br /&gt;
The actual KVM restore process is controlled completely by the Control instance daemon via the OpenLDAP directory. See [[#OpenLDAP Directory Integration|OpenLDAP Directory Integration]] for the involved attributes and possible values.&lt;br /&gt;
&lt;br /&gt;
[[File:Daemon-interaction-restore.png|thumb|500px|none|Figure 3: Communication between all involved parties during the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update these interactions by editing [[File:Restore-Interaction.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== Control instance Daemon Interaction for restoring a Backup with LDIF Examples ===&lt;br /&gt;
==== Step 01: Start the unretainSmallFiles process (Control instance daemon) ====&lt;br /&gt;
The first step of the restore process is to copy the small files (in this case the XML file and the LDIF) from the configured backup location to the configured retain location. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainSmallFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainSmallFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
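Such an LDIF change is applied with a standard ldapmodify call. The sketch below only writes the LDIF to a temporary file and prints a placeholder invocation (server name and bind DN are invented):&lt;br /&gt;

```shell
ldif=$(mktemp)
printf '%s\n' \
  'dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch' \
  'changetype: modify' \
  'replace: sstProvisioningState' \
  'sstProvisioningState: 0' \
  '-' \
  'replace: sstProvisioningMode' \
  'sstProvisioningMode: unretainSmallFiles' | tee "$ldif"
# Placeholder invocation (-W prompts for the bind password); printed only.
echo "ldapmodify -H ldaps://ldap.example.org -x -D cn=Manager,dc=example,dc=org -W -f $ldif"
```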
&lt;br /&gt;
==== Step 02: Starting the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingSmallFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the small files for the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingSmallFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 03: Finalizing the unretainSmallFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the small files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedSmallFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the small files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedSmallFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 05: Start the unretainLargeFiles process (Control instance daemon) ====&lt;br /&gt;
The next step in the restore process is to copy the large files (state file and disk images) from the configured backup directory to the configured retain directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# unretainLargeFiles (this way the Provisioning-Backup-KVM daemon knows that it must start the unretainLargeFiles process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 06: Starting the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the command to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainingLargeFiles&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is unretaining the large files for the virtual machine or virtual machine template.&lt;br /&gt;
&lt;br /&gt;
In the meantime the vm-manager merges the LDIF we have unretained in [[#Step_02:_Starting_the_unretainSmallFiles_process_.28Provisioning-Backup-KVM_daemon.29 | step 02]] with the one in the live directory to sort out possible differences in the configuration of the virtual machine.  &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to unretainingLargeFiles by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainingLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 07: Finalizing the unretainLargeFiles process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the commands to unretain the large files, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;unretainedLargeFiles&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the unretaining of all the large files from the configured backup location is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: unretainedLargeFiles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 09: Start the restore process (Control instance daemon) ====&lt;br /&gt;
Since we now have all the necessary files in the configured retain location, the restore process can be started: we simply copy the disk images back to their original location and restore the VM from the state file (which is also at the configured retain location).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set to zero by the Control instance daemon, when sstProvisioningMode is modified to &lt;br /&gt;
# restore (this way the Provisioning-Backup-KVM daemon knows that it must start the restore process).&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restore&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 10: Starting the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon receives the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restoring&#039;&#039;&#039; to tell the Control instance daemon and other interested parties that it is restoring the virtual machine or virtual machine template.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningMode is set to restoring by the Provisioning-Backup-KVM daemon.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restoring&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 11: Finalizing the restore process (Provisioning-Backup-KVM daemon) ====&lt;br /&gt;
As soon as the Provisioning-Backup-KVM daemon has executed the restore command, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;restored&#039;&#039;&#039;, the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC) and &#039;&#039;&#039;sstProvisioningReturnValue&#039;&#039;&#039; to zero to tell the Control instance daemon and other interested parties that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is set with the current timestamp by the Provisioning-Backup-KVM daemon, when&lt;br /&gt;
# the attributes sstProvisioningReturnValue and sstProvisioningMode are set.&lt;br /&gt;
# With this combination, the Control instance daemon knows that it can proceed.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012000Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningReturnValue&lt;br /&gt;
sstProvisioningReturnValue: 0&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: restored&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Step 12: Finalizing the restore process (Control instance daemon) ====&lt;br /&gt;
As soon as the Control instance daemon notices that the attribute &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; is set to &#039;&#039;&#039;restored&#039;&#039;&#039;, it sets the &#039;&#039;&#039;sstProvisioningMode&#039;&#039;&#039; to &#039;&#039;&#039;finished&#039;&#039;&#039; and the &#039;&#039;&#039;sstProvisioningState&#039;&#039;&#039; to the current timestamp (UTC). All interested parties now know that the restore process is finished.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The attribute sstProvisioningState is updated with the current time by the Control instance daemon, when sstProvisioningMode is&lt;br /&gt;
# set to finished.&lt;br /&gt;
# All interested parties now know that the restore process is finished.&lt;br /&gt;
dn: ou=20121002T010000Z,ou=backup,sstVirtualMachine=kvm-005,ou=virtual machines,ou=virtualization,ou=services,o=stepping-stone,c=ch&lt;br /&gt;
changetype: modify&lt;br /&gt;
replace: sstProvisioningState&lt;br /&gt;
sstProvisioningState: 20121002T012001Z&lt;br /&gt;
-&lt;br /&gt;
replace: sstProvisioningMode&lt;br /&gt;
sstProvisioningMode: finished&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
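Seen as a whole, steps 01 to 12 form a small state machine driven by sstProvisioningMode. The loop below simulates the sequence of mode values in plain shell (no LDAP involved) to illustrate which values let the Control instance daemon proceed:&lt;br /&gt;

```shell
# Simulated sequence of sstProvisioningMode values during a restore.
modes="unretainSmallFiles unretainingSmallFiles unretainedSmallFiles \
unretainLargeFiles unretainingLargeFiles unretainedLargeFiles \
restore restoring restored finished"
done_flag=no
for mode in $modes; do
  case "$mode" in
    unretained*|restored) echo "control daemon: $mode, proceed to next step" ;;
    finished)             echo "control daemon: restore finished"; done_flag=yes ;;
    *)                    echo "control daemon: waiting ($mode)" ;;
  esac
done
```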
&lt;br /&gt;
== Current Implementation (Restore) ==&lt;br /&gt;
* Since the prov-backup-kvm daemon is not running on the vm-nodes (c.f. [[stoney_conductor:_Backup#State_of_the_art]]), the restore process does not work when clicking the icon in the web interface. &lt;br /&gt;
* Resolving the conflicts in the backend entry and the XML description file is not yet done&lt;br /&gt;
** In fact, all steps not executed by prov-backup-kvm are not yet properly implemented (c.f. [[stoney_conductor:_prov_backup_kvm#Restore]])&lt;br /&gt;
* The implementation is done, but the last step of the [[#Restore_2 | restore process ]] differs:&lt;br /&gt;
** The &amp;lt;code&amp;gt;virsh restore&amp;lt;/code&amp;gt; command is not executed with the &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option; instead, the XML embedded in the state file is used when restoring the machine. Therefore the conflicts are not properly resolved. &lt;br /&gt;
*** --[[User:Pat|Pat]] ([[User talk:Pat|talk]]) 09:41, 29 October 2013 (CET): Currently the [http://search.cpan.org/~danberr/Sys-Virt-1.1.3/lib/Sys/Virt.pm Sys::Virt] library does not support the --xml parameter when restoring a domain&lt;br /&gt;
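A possible workaround (not verified here, and depending on the libvirt version) is the virsh command save-image-define, which replaces the XML embedded in a saved state file before the restore, so no &amp;lt;code&amp;gt;--xml&amp;lt;/code&amp;gt; option is needed. The sketch below only prints the command; the paths are placeholders:&lt;br /&gt;

```shell
# Replace the XML embedded in the state file, then restore without --xml.
# Paths are placeholders; run this on the VM-Node holding the state file.
state="/path/to/retain/vm-001.state"
xml="/path/to/retain/vm-001.xml"
cmd="virsh save-image-define $state $xml"
echo "$cmd"   # printed only in this sketch
```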
&lt;br /&gt;
=== How to manually restore a machine from backup ===&lt;br /&gt;
&#039;&#039;&#039;Important&#039;&#039;&#039;: Before you continue with this guide, make sure that you have no other way to restore the machine. It might be easier and safer to recover lost files from the online backup if the machine has one set up.&lt;br /&gt;
&lt;br /&gt;
If you really have to restore the machine from the backup:&lt;br /&gt;
# Stop the machine via the [https://cloud.stepping-stone.ch/vm-manager/ web interface]&lt;br /&gt;
# Log in (as root) on the [[VM-Node]] the machine was running on&lt;br /&gt;
&lt;br /&gt;
As a first step, set some useful bash variables so that you can copy and paste the commands in the following guide:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Double check all variables you are setting here. If one of them is wrong, you may restore over a running machine or overwrite a live disk image!&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
machinename=&amp;quot;&amp;lt;MACHINE-NAME&amp;gt;&amp;quot; # For example: machinename=&amp;quot;b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6&amp;quot;&lt;br /&gt;
vmpool=&amp;quot;&amp;lt;VM-POOL&amp;gt;&amp;quot; # For example vmpool=&amp;quot;0f83f084-8080-413e-b558-b678e504836e&amp;quot;&lt;br /&gt;
vmtype=&amp;quot;&amp;lt;VM-TYPE&amp;gt;&amp;quot; # For example vmtype=&amp;quot;vm-persistent&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change to the backup directory for the given machine and check the iterations:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Change into the most recent iteration:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd 2014...&lt;br /&gt;
ls -al&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In there you should have: &lt;br /&gt;
* The state file &amp;lt;MACHINE-NAME&amp;gt;.state.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.state.20140109T134445Z)&lt;br /&gt;
* The XML description &amp;lt;MACHINE-NAME&amp;gt;.xml.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.xml.20140109T134445Z)&lt;br /&gt;
* The ldif file &amp;lt;MACHINE-NAME&amp;gt;.ldif.&amp;lt;BACKUP-DATE&amp;gt; (for example b6dc3d27-5981-4b18-8f3f-31ed3d21a3c6.ldif.20140109T134445Z)&lt;br /&gt;
* And at least one disk image &amp;lt;DISK-IMAGE&amp;gt;.qcow2.&amp;lt;BACKUP-DATE&amp;gt; (for example 8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2.20140109T134445Z)&lt;br /&gt;
Now save the backup date and the disk image(s) in variables:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
backupdate=&amp;quot;&amp;lt;BACKUP-DATE&amp;gt;&amp;quot; # For example: backupdate=&amp;quot;20140109T134445Z&amp;quot;&lt;br /&gt;
diskimage1=&amp;quot;&amp;lt;DISK-IMAGE-1&amp;gt;.qcow2&amp;quot; # For example: diskimage1=&amp;quot;8798561b-d5de-471b-a6fc-ec2b4831ed12.qcow2&amp;quot;&lt;br /&gt;
diskimage2=&amp;quot;&amp;lt;DISK-IMAGE-2&amp;gt;.qcow2&amp;quot; # For example: diskimage2=&amp;quot;aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.qcow2&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Have another look at the variables and &#039;&#039;&#039;double check them again&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo &amp;quot;Machine Name = ${machinename}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Pool = ${vmpool}&amp;quot;&lt;br /&gt;
echo &amp;quot;VM Type = ${vmtype}&amp;quot;&lt;br /&gt;
echo &amp;quot;Backup date = ${backupdate}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 1 = ${diskimage1}&amp;quot;&lt;br /&gt;
echo &amp;quot;Disk Image 2 = ${diskimage2}&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all these files to the retain location:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
currentdate=`date --utc +&#039;%Y%m%dT%H%M%SZ&#039;`&lt;br /&gt;
mkdir -p /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.ldif.${backupdate} /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Check if there is a difference between the current XML file and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
diff -Naur /etc/libvirt/qemu/${machinename}.xml /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.xml.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Now you are entering the critical part. You won&#039;t be able to undo the following steps!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Check if there is a difference between the current LDAP entry and the one from the backup&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
domain=&amp;quot;&amp;lt;DOMAIN&amp;gt;&amp;quot; # For example domain=&amp;quot;stoney-cloud.org&amp;quot;&lt;br /&gt;
ldapbase=&amp;quot;&amp;lt;LDAPBASE&amp;gt;&amp;quot; # For example ldapbase=&amp;quot;dc=stoney-cloud,dc=org&amp;quot;&lt;br /&gt;
ldapsearch -H ldaps://ldapm.${domain} -b &amp;quot;sstVirtualMachine=${machinename},ou=virtual machines,ou=virtualization,ou=services,${ldapbase}&amp;quot; -s sub -x -LLL -o ldif-wrap=no -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W  &amp;quot;(objectclass=*)&amp;quot; &amp;gt; /tmp/${machinename}.ldif&lt;br /&gt;
diff -Naur /tmp/${machinename}.ldif /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
and &#039;&#039;&#039;edit the file at the retain location&#039;&#039;&#039; according to your needs.&lt;br /&gt;
&lt;br /&gt;
If there are no differences (or the differences are not important), you can skip the following step. Otherwise, use [https://cloud.stepping-stone.ch/phpldapadmin PhpLdapAdmin] to delete the machine from the LDAP directory (do not forget to delete the DHCP entry &amp;lt;code&amp;gt;dn: cn=&amp;lt;MACHINE-NAME&amp;gt;,ou=virtual machines,cn=192.168.140.0,cn=config-01,ou=dhcp,ou=networks,ou=virtualization,ou=services,dc=stoney-cloud,dc=org&amp;lt;/code&amp;gt;). Then add the LDIF you just edited to the LDAP directory (first do some general replacements):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sed -i\&lt;br /&gt;
 -e &#039;s/snapshotting/finished/&#039;\&lt;br /&gt;
 -e &#039;/member.*/d&#039;\&lt;br /&gt;
 /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&lt;br /&gt;
/usr/bin/ldapadd -H &amp;quot;ldaps://ldapm.${domain}&amp;quot; -x -D &amp;quot;cn=Manager,${ldapbase}&amp;quot; -W -f /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}/${machinename}.ldif.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Undefine the machine&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh undefine ${machinename}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy all the disk images from the backup location back to their original location&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage1}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage1}&lt;br /&gt;
cp -p /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${diskimage2}.${backupdate} /var/virtualization/${vmtype}/${vmpool}/${diskimage2}&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And restore the domain from the state file at the backup location (note that the XML embedded in the state file is used):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virsh restore /var/backup/virtualization/${vmtype}/${vmpool}/${machinename}/${backupdate}/${machinename}.state.${backupdate}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now the machine should be up and running again, continuing where it was stopped when the backup was taken.&lt;br /&gt;
&lt;br /&gt;
If everything is OK, you can clean up the created files and directories:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rm -rf /var/virtualization/retain/${vmtype}/${vmpool}/${machinename}/${currentdate}&lt;br /&gt;
rm /tmp/${machinename}.ldif&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Next steps ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: stoney conductor]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3733</id>
		<title>stoney conductor: prov-backup-kvm</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_conductor:_prov-backup-kvm&amp;diff=3733"/>
		<updated>2014-06-26T13:23:42Z</updated>

		<summary type="html">&lt;p&gt;Pat: /* commitSnapshot */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
The &#039;&#039;&#039;Provisioning-Backup-KVM Daemon&#039;&#039;&#039; is written in Perl and uses the mechanisms described under [[stoney core: OpenLDAP directory data organisation]].&lt;br /&gt;
&lt;br /&gt;
= Workflow =&lt;br /&gt;
== Backup ==&lt;br /&gt;
This is the simplified workflow for the Provisioning-Backup-KVM Daemon. The subroutines (snapshot, merge and retain) are shown later.&lt;br /&gt;
&lt;br /&gt;
[[File:KVM-Backup-Workflow.png|thumb|none|400px|Figure 1: Simplified prov-backup-kvm workflow]]&lt;br /&gt;
&lt;br /&gt;
You can modify/update this workflow by editing [[File:KVM-Backup-simple.xmi]] (you may need [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram programme for KDE to display the content properly).&lt;br /&gt;
&lt;br /&gt;
=== createSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Snapshot.png|thumb|none|500px|Figure 2: Detailed workflow for the snapshot process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also: [[stoney_conductor:_Backup#Snapshot | Snapshot: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== exportSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Merge.png|thumb|none|500px|Figure 3: Detailed workflow for the merge process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also: [[ stoney_conductor:_Backup#Merge | Merge: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
=== commitSnapshot ===&lt;br /&gt;
[[File:KVM-Backup-Workflow-Retain.png|thumb|none|500px|Figure 4: Detailed workflow for the retain process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-detailed.xmi]]&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Retain | Retain: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
== Restore ==&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#00FF00&amp;quot;&amp;gt;Task for the control-instance daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FFFF00&amp;quot;&amp;gt;Task for the prov-backup-kvm daemon&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;background:#FF8000&amp;quot;&amp;gt;Task for the vm-manager&amp;lt;/span&amp;gt;&lt;br /&gt;
[[File:KVM-Backup-Workflow-Restore.png|thumb|none|500px|Figure 5: Detailed workflow for the restore process]]&lt;br /&gt;
&lt;br /&gt;
You can edit this workflow with the following file (you may need umbrello to modify it): [[File:KVM-Backup-Workflow-restore.xmi]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See also [[stoney_conductor:_Backup#Basic_idea_2 | Restore: Basic idea]]&lt;br /&gt;
&lt;br /&gt;
= Configuration =&lt;br /&gt;
== Global ==&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Copyright (C) 2013 stepping stone GmbH&lt;br /&gt;
#                    Switzerland&lt;br /&gt;
#                    http://www.stepping-stone.ch&lt;br /&gt;
#                    support@stepping-stone.ch&lt;br /&gt;
#&lt;br /&gt;
# Authors:&lt;br /&gt;
#  Pat Kläy &amp;lt;pat.klaey@stepping-stone.ch&amp;gt;&lt;br /&gt;
#  &lt;br /&gt;
# Licensed under the EUPL, Version 1.1.&lt;br /&gt;
#&lt;br /&gt;
# You may not use this work except in compliance with the&lt;br /&gt;
# Licence.&lt;br /&gt;
# You may obtain a copy of the Licence at:&lt;br /&gt;
#&lt;br /&gt;
# http://www.osor.eu/eupl&lt;br /&gt;
#&lt;br /&gt;
# Unless required by applicable law or agreed to in&lt;br /&gt;
# writing, software distributed under the Licence is&lt;br /&gt;
# distributed on an &amp;quot;AS IS&amp;quot; basis,&lt;br /&gt;
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either&lt;br /&gt;
# express or implied.&lt;br /&gt;
# See the Licence for the specific language governing&lt;br /&gt;
# permissions and limitations under the Licence.&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[Global]&lt;br /&gt;
# If true, the script logs debug information to the log-file.&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
&lt;br /&gt;
# If true the script logs additional information to the log-file.&lt;br /&gt;
LOG_INFO = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs warnings to the log-file.&lt;br /&gt;
LOG_WARNING = 1&lt;br /&gt;
&lt;br /&gt;
# If true, the script logs errors to the log-file.&lt;br /&gt;
LOG_ERR = 1&lt;br /&gt;
&lt;br /&gt;
# The environment indicates the hostname (fqdn) on which the prov-backup-kvm &lt;br /&gt;
# daemon is running&lt;br /&gt;
ENVIRONMENT = &amp;lt;STONEY-CLOUD-NODE-NAME&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
# All information related to the database (backend) the daemon connects to&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
SERVER = &amp;lt;STONEY-CLOUD-LDAP-SERVER&amp;gt;&lt;br /&gt;
PORT = &amp;lt;STONEY-CLOUD-LDAP-PORT&amp;gt;&lt;br /&gt;
ADMIN_USER = &amp;lt;STONEY-CLOUD-LDAP-BINDDN&amp;gt;&lt;br /&gt;
ADMIN_PASSWORD = &amp;lt;STONEY-CLOUD-LDAP-BIND-PASSWORD&amp;gt;&lt;br /&gt;
SERVICE_SUBTREE = &amp;lt;STONEY-CLOUD-LDAP-SERVICE-SUBTREE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# A cookie file is used so that the daemon can be restarted without&lt;br /&gt;
# processing every entry again (otherwise all entries appear as new when the daemon starts)&lt;br /&gt;
COOKIE_FILE = &amp;lt;STONEY-CLOUD-LDAP-COOKIE-FILE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# The default cookie just contains an empty CSN; this way, all entries&lt;br /&gt;
# are processed&lt;br /&gt;
DEFAULT_COOKIE = rid=001,csn=&lt;br /&gt;
&lt;br /&gt;
# The search filter for the database. Only process entries found with this&lt;br /&gt;
# filter&lt;br /&gt;
SEARCH_FILTER = (&amp;amp;(entryCSN&amp;gt;=%entryCSN%)(objectClass=*))&lt;br /&gt;
&lt;br /&gt;
# Indicates the prov-backup-kvm configuration which applies to every&lt;br /&gt;
# VM-Pool and every VM unless overwritten by a VM-Pool- or VM-specific&lt;br /&gt;
# configuration&lt;br /&gt;
STONEY_CLOUD_WIDE_CONFIGURATION = &amp;lt;STONEY-CLOUD-LDAP-PROV-BACKUP-KVM-DEFAULT-CONFIGURATION&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Configuration concerning the provisioning module&lt;br /&gt;
[Service]&lt;br /&gt;
&lt;br /&gt;
# The modus should always be selfcare&lt;br /&gt;
MODUS = selfcare&lt;br /&gt;
&lt;br /&gt;
# Which TransportApi is used to execute the commands on the destination system&lt;br /&gt;
# TransportApi can be &amp;quot;LocalCLI&amp;quot; or &amp;quot;CLISSH&amp;quot;&lt;br /&gt;
TRANSPORTAPI = LocalCLI&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning service&lt;br /&gt;
SERVICE = Backup&lt;br /&gt;
&lt;br /&gt;
# The name of the provisioning type&lt;br /&gt;
TYPE = KVM&lt;br /&gt;
&lt;br /&gt;
# The syslog tag (normally service-type)&lt;br /&gt;
SYSLOG = Backup-KVM&lt;br /&gt;
&lt;br /&gt;
# All information concerning the gateway (TransportApi)&lt;br /&gt;
[Gateway]&lt;br /&gt;
HOST = localhost&lt;br /&gt;
USER = provisioning&lt;br /&gt;
DSA_FILE = none&lt;br /&gt;
&lt;br /&gt;
# Service specific configuration which is not present in the backend&lt;br /&gt;
[Backup]&lt;br /&gt;
&lt;br /&gt;
# Which command is used to export files&lt;br /&gt;
EXPORT_COMMAND = cp -p&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
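&lt;br /&gt;
The daemon itself is written in Perl, but the file above is a plain INI-style configuration, so any INI parser can read it. As a minimal, language-neutral illustration (shown in Python; the hostname and port below are placeholder values, not real defaults), the standard configparser module handles the format directly:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import configparser&lt;br /&gt;
&lt;br /&gt;
# A fragment of the [Global] and [Database] sections shown above&lt;br /&gt;
# (ENVIRONMENT and PORT are placeholder values).&lt;br /&gt;
sample = &#039;&#039;&#039;&lt;br /&gt;
[Global]&lt;br /&gt;
LOG_DEBUG = 1&lt;br /&gt;
ENVIRONMENT = node-01.example.org&lt;br /&gt;
&lt;br /&gt;
[Database]&lt;br /&gt;
BACKEND = LDAP&lt;br /&gt;
PORT = 389&lt;br /&gt;
&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
config = configparser.ConfigParser()&lt;br /&gt;
config.read_string(sample)&lt;br /&gt;
&lt;br /&gt;
debug = config.getboolean(&#039;Global&#039;, &#039;LOG_DEBUG&#039;)  # &#039;1&#039; parses as True&lt;br /&gt;
port = config.getint(&#039;Database&#039;, &#039;PORT&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;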
&lt;br /&gt;
== Backend ==&lt;br /&gt;
In the backend, you need at least one configuration which applies to the whole stoney cloud. This configuration is referenced in the [[#Global|global configuration]]. You can overwrite the stoney-cloud-wide configuration for&lt;br /&gt;
* A VM-Pool&lt;br /&gt;
* A single VM&lt;br /&gt;
The configuration which applies to a VM is evaluated as follows:&lt;br /&gt;
# Check if the VM has a VM-specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# Check if the VM-Pool has a specific configuration&lt;br /&gt;
#* If yes, this one applies&lt;br /&gt;
#* If not, continue&lt;br /&gt;
# The stoney-cloud-wide configuration applies&lt;br /&gt;
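&lt;br /&gt;
The evaluation order above is a simple first-match fallback. A minimal sketch of that lookup (in Python for illustration; the daemon itself is written in Perl, and the function name is hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def effective_config(vm_cfg, pool_cfg, cloud_cfg):&lt;br /&gt;
    # Mirror the evaluation order: the VM-specific configuration first,&lt;br /&gt;
    # then the VM-Pool-specific one, then the stoney-cloud-wide fallback.&lt;br /&gt;
    for cfg in (vm_cfg, pool_cfg, cloud_cfg):&lt;br /&gt;
        if cfg is not None:&lt;br /&gt;
            return cfg&lt;br /&gt;
    raise LookupError(&#039;no backup configuration found&#039;)&lt;br /&gt;
&lt;br /&gt;
# The VM has no specific configuration, so the pool configuration wins.&lt;br /&gt;
chosen = effective_config(None, {&#039;sstBackupNumberOfIterations&#039;: 2},&lt;br /&gt;
                          {&#039;sstBackupNumberOfIterations&#039;: 1})&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;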
&lt;br /&gt;
=== Mandatory Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupNumberOfIterations&#039;&#039;&#039;: An integer value defining how many backup iterations should be kept. Default is 1 (for disaster recovery).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRootDirectory&#039;&#039;&#039;: The path to the backup root directory where all iterations of disk-images and state files are stored. Default is file:///var/backup/virtualization.&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRetainDirectory&#039;&#039;&#039;: The path to the local retain directory where the temporary snapshots (disk-image and state file) are stored. Default is file:///var/virtualization/retain.&lt;br /&gt;
* &#039;&#039;&#039;sstRestoreVMWithoutState&#039;&#039;&#039;: Boolean value which indicates whether or not to restore a virtual machine without the state. Default is FALSE (most often we want to restore the state together with the virtual machine).&lt;br /&gt;
* &#039;&#039;&#039;sstBackupRamDiskLocation&#039;&#039;&#039;: Path to the RAM-Disk. Default is /mnt/ramdisk. Because this attribute can be set for the whole FOSS-Cloud, for a specific VM-Pool, for a specific virtual machine or for a specific virtual machine template, it is independent of the VM-Nodes. Therefore, there is no guarantee that this RAM-Disk exists on all VM-Nodes; a check for its existence is mandatory!&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineForceStart&#039;&#039;&#039;: Force-start the VM if its state cannot be restored during the backup process. TRUE or FALSE, default is FALSE. Attention: if set to TRUE, this could lead to file system inconsistencies in the virtual machine.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationBandwidthMerge&#039;&#039;&#039;: Bandwidth of the disk merging process (specifies the maximum I/O rate to allow in Megabyte/s). Default is 0 (unlimited). Integer Attribute, single value.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageFormat&#039;&#039;&#039;: The format for the new disk image that is created during the backup process. Default is qcow2.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageOwner&#039;&#039;&#039;: The owner for the new disk image that is created during the backup process. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageGroup&#039;&#039;&#039; : The group for the new disk image that is created during the backup process. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImagePermission&#039;&#039;&#039;: The permission (in octal representation) for the new disk image that is created during the backup process. Default is 660 (equivalent to 0660).&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryOwner&#039;&#039;&#039;: The owner for the new directory where the disk image is located. Default is root.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryGroup&#039;&#039;&#039;: The group for the new directory where the disk image is located. Default is vm-storage.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationDiskImageDirectoryPermission&#039;&#039;&#039;: The permission (in octal representation) for the new directory where the disk image is located. Default is 770 (equivalent to 0770).&lt;br /&gt;
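&lt;br /&gt;
Note that the permission attributes are octal strings. A short sketch (in Python, purely for illustration) of how such a value has to be interpreted before it can be passed to chmod:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import tempfile&lt;br /&gt;
&lt;br /&gt;
def octal_mode(permission):&lt;br /&gt;
    # sstVirtualizationDiskImagePermission stores an octal string:&lt;br /&gt;
    # &#039;660&#039; means 0o660 (rw-rw----), not decimal 660.&lt;br /&gt;
    return int(permission, 8)&lt;br /&gt;
&lt;br /&gt;
# Apply the default image permission to a stand-in temporary file.&lt;br /&gt;
with tempfile.NamedTemporaryFile() as handle:&lt;br /&gt;
    os.chmod(handle.name, octal_mode(&#039;660&#039;))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;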
&lt;br /&gt;
=== Optional Configuration-Parameters ===&lt;br /&gt;
* &#039;&#039;&#039;sstBackupExcludeFromBackup&#039;&#039;&#039;: Do we want to exclude a virtual machine from the default backup plan? Default is FALSE.&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStop&#039;&#039;&#039;: Multiple dependencies for the stopping order can be defined. Example: a web VM depends on the corresponding database VM. IA5String, multi-valued. This attribute must exist in all virtual machine entries that are to be stopped in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID1&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID3&lt;br /&gt;
* &#039;&#039;&#039;sstVirtualizationVirtualMachineSequenceStart&#039;&#039;&#039;: Multiple dependencies for the starting order can be defined. Example: a database VM must be started before the corresponding web VM. IA5String, multi-valued. This attribute must exist in all virtual machine entries that are to be started in a certain order. Example (0, 1, 2, ... is the order; UUID1, UUID2, ... is the UUID of a virtual machine):&lt;br /&gt;
** 0: UUID3&lt;br /&gt;
** 1: UUID2&lt;br /&gt;
** 2: UUID1&lt;br /&gt;
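&lt;br /&gt;
Each sequence value has the form &amp;quot;position: UUID&amp;quot;, so the start or stop order can be recovered by sorting on the numeric prefix. A minimal sketch (in Python for illustration; the helper name is hypothetical):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def ordered_uuids(values):&lt;br /&gt;
    # Parse multi-valued entries like &#039;0: UUID3&#039; and sort them&lt;br /&gt;
    # by their numeric position.&lt;br /&gt;
    pairs = []&lt;br /&gt;
    for value in values:&lt;br /&gt;
        position, _, uuid = value.partition(&#039;:&#039;)&lt;br /&gt;
        pairs.append((int(position), uuid.strip()))&lt;br /&gt;
    return [uuid for _, uuid in sorted(pairs)]&lt;br /&gt;
&lt;br /&gt;
# Start order from the example above: UUID3 first, then UUID2, then UUID1.&lt;br /&gt;
start_order = ordered_uuids([&#039;1: UUID2&#039;, &#039;0: UUID3&#039;, &#039;2: UUID1&#039;])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;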
&lt;br /&gt;
= Exit codes =&lt;br /&gt;
The following list defines the return codes and their meaning for the KVM-Backup script (see also [https://github.com/stepping-stone/prov-backup-kvm/blob/master/lib/Provisioning/Backup/KVM/Constants.pm Constants.pm]):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use constant SUCCESS_CODE                               =&amp;gt; 0;&lt;br /&gt;
&lt;br /&gt;
### Error codes constants&lt;br /&gt;
use constant UNDEFINED_ERROR                            =&amp;gt; 1; # Always the first!&lt;br /&gt;
use constant MISSING_PARAMETER_IN_CONFIG_FILE           =&amp;gt; 2;&lt;br /&gt;
use constant CONFIGURED_RAM_DISK_IS_NOT_VALUD           =&amp;gt; 3;&lt;br /&gt;
use constant NOT_ENOUGH_SPACE_ON_RAM_DISK               =&amp;gt; 4;&lt;br /&gt;
use constant CANNOT_SAVE_MACHINE_STATE                  =&amp;gt; 5;&lt;br /&gt;
use constant CANNOT_WRITE_TO_BACKUP_LOCATION            =&amp;gt; 6;&lt;br /&gt;
use constant CANNOT_COPY_FILE_TO_BACKUP_LOCATION        =&amp;gt; 7;&lt;br /&gt;
use constant CANNOT_COPY_IMAGE_TO_BACKUP_LOCATION       =&amp;gt; 8;&lt;br /&gt;
use constant CANNOT_COPY_XML_TO_BACKUP_LOCATION         =&amp;gt; 9;&lt;br /&gt;
use constant CANNOT_COPY_BACKEND_FILE_TO_BACKUP_LOCATION=&amp;gt; 10;&lt;br /&gt;
use constant CANNOT_MERGE_DISK_IMAGES                   =&amp;gt; 11;&lt;br /&gt;
use constant CANNOT_REMOVE_OLD_DISK_IMAGE               =&amp;gt; 12;&lt;br /&gt;
use constant CANNOT_REMOVE_FILE                         =&amp;gt; 13;&lt;br /&gt;
use constant CANNOT_CREATE_EMPTY_DISK_IMAGE             =&amp;gt; 15;&lt;br /&gt;
use constant CANNOT_RENAME_DISK_IMAGE                   =&amp;gt; 16;&lt;br /&gt;
use constant CANNOT_CONNECT_TO_BACKEND                  =&amp;gt; 17;&lt;br /&gt;
use constant WRONG_STATE_INFORMATION                    =&amp;gt; 18;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_OWNERSHIP            =&amp;gt; 19;&lt;br /&gt;
use constant CANNOT_SET_DISK_IMAGE_PERMISSION           =&amp;gt; 20;&lt;br /&gt;
use constant CANNOT_RESTORE_MACHINE                     =&amp;gt; 21;&lt;br /&gt;
use constant CANNOT_LOCK_MACHINE                        =&amp;gt; 22;&lt;br /&gt;
use constant CANNOT_FIND_MACHINE                        =&amp;gt; 23;&lt;br /&gt;
use constant CANNOT_COPY_STATE_FILE_TO_RETAIN           =&amp;gt; 24;&lt;br /&gt;
use constant RETAIN_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 25;&lt;br /&gt;
use constant BACKUP_ROOT_DIRECTORY_DOES_NOT_EXIST       =&amp;gt; 26;&lt;br /&gt;
use constant CANNOT_CREATE_DIRECTORY                    =&amp;gt; 27;&lt;br /&gt;
use constant CANNOT_SAVE_XML                            =&amp;gt; 28;&lt;br /&gt;
use constant CANNOT_SAVE_BACKEND_ENTRY                  =&amp;gt; 29;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_OWNERSHIP             =&amp;gt; 30;&lt;br /&gt;
use constant CANNOT_SET_DIRECTORY_PERMISSION            =&amp;gt; 31;&lt;br /&gt;
use constant CANNOT_FIND_CONFIGURATION_ENTRY            =&amp;gt; 32;&lt;br /&gt;
use constant BACKEND_XML_UNCONSISTENCY                  =&amp;gt; 33;&lt;br /&gt;
use constant CANNOT_CREATE_TARBALL                      =&amp;gt; 34;&lt;br /&gt;
use constant UNSUPPORTED_FILE_TRANSFER_PROTOCOL         =&amp;gt; 35;&lt;br /&gt;
use constant UNKNOWN_BACKEND_TYPE                       =&amp;gt; 36;&lt;br /&gt;
use constant MISSING_NECESSARY_FILES                    =&amp;gt; 37;&lt;br /&gt;
use constant CORRUPT_DISK_IMAGE_FOUND                   =&amp;gt; 38;&lt;br /&gt;
use constant UNSUPPORTED_CONFIGURATION_PARAMETER        =&amp;gt; 39;&lt;br /&gt;
use constant CANNOT_MOVE_DISK_IMAGE_TO_ORIGINAL_LOCATION=&amp;gt; 40;&lt;br /&gt;
use constant CANNOT_DEFINE_MACHINE                      =&amp;gt; 41;&lt;br /&gt;
use constant CANNOT_START_MACHINE                       =&amp;gt; 42;&lt;br /&gt;
use constant CANNOT_WORK_ON_UNDEFINED_OBJECT            =&amp;gt; 43;&lt;br /&gt;
use constant CANNOT_READ_STATE_FILE                     =&amp;gt; 44;&lt;br /&gt;
use constant CANNOT_READ_XML_FILE                       =&amp;gt; 45;&lt;br /&gt;
use constant NOT_ALL_FILES_DELETED_FROM_RETAIN_LOCATION =&amp;gt; 46;&lt;br /&gt;
use constant NOT_ENOUGH_DISK_SPACE                      =&amp;gt; 47;&lt;br /&gt;
use constant NO_DISK_SPACE_INFORMATION                  =&amp;gt; 48;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
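&lt;br /&gt;
A wrapper or monitoring script can map an exit status back to its constant name. A short sketch (in Python for illustration, covering only a few of the codes above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Subset of the exit codes listed above (the full list is in Constants.pm).&lt;br /&gt;
EXIT_CODES = {&lt;br /&gt;
    0: &#039;SUCCESS_CODE&#039;,&lt;br /&gt;
    1: &#039;UNDEFINED_ERROR&#039;,&lt;br /&gt;
    4: &#039;NOT_ENOUGH_SPACE_ON_RAM_DISK&#039;,&lt;br /&gt;
    47: &#039;NOT_ENOUGH_DISK_SPACE&#039;,&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def describe_exit(status):&lt;br /&gt;
    return EXIT_CODES.get(status, &#039;unknown exit code %d&#039; % status)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;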
&lt;br /&gt;
= Next steps =&lt;br /&gt;
* Change the behaviour of the snapshot/merge process&lt;br /&gt;
** Instead of merging the original file into the new one, merge (commit) the backing store file back into the original one&lt;br /&gt;
*** This reduces the backup (merge) time considerably.&lt;br /&gt;
*** Requires a different behaviour: save -&amp;gt; copy/move -&amp;gt; create new image -&amp;gt; restore -&amp;gt; merge&lt;br /&gt;
&lt;br /&gt;
= Source Code =&lt;br /&gt;
The source code is located in our GitHub Repository:&lt;br /&gt;
&lt;br /&gt;
https://github.com/stepping-stone/prov-backup-kvm&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney conductor]][[Category:Provisioning Modules]]&lt;/div&gt;</summary>
		<author><name>Pat</name></author>
	</entry>
</feed>