stoney cloud: Multi-Node Installation

== System Overview ==
The [http://www.stoney-cloud.org/ stoney cloud] builds upon various standard open source components and can be run on commodity hardware. The final Multi-Node Setup consists of the following components:
* One [[Primary-Master-Node]] with an OpenLDAP directory server for the storage of the stoney cloud user and service related data, the web based [[VM-Manager]] interface and the Linux kernel based virtualization technology.
* One [[Secondary-Master-Node]] with the Linux kernel based virtualization technology.
* Two [[Storage-Node | Storage-Nodes]] configured as a replicated and distributed data storage service based on [http://www.gluster.org/ GlusterFS].
The components communicate with each other over a standard Ethernet based IPv4 network.
=== Prerequisites ===
The following items and conditions are required to install and configure a stoney cloud environment:
* Dedicated hardware (4 servers) which fulfils the following requirements:
** 64-bit Intel CPU with VT technology (AMD is not tested at the moment).
=== Limitations ===
During the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system, the first two physical Ethernet interfaces are automatically configured as a logical interface (bond0) and the four tagged VLANs are set up. If more than two physical Ethernet interfaces are to be included in the logical interface (bond0), or the physical Ethernet interfaces have different bandwidths (for example 1 Gigabit/s and 10 Gigabit/s), the final logical interface (bond0) needs to be configured manually after the installation of the [http://www.gentoo.org/ Gentoo Linux] operating system.
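The following sketch shows what such a manual configuration could look like in <code>/etc/conf.d/net</code> on Gentoo (netifrc); the interface names and the bonding options are examples and have to be adapted to the local hardware:
<pre>
# /etc/conf.d/net -- manual bond0 over more than two interfaces (sketch)
config_eth0="null"
config_eth1="null"
config_eth2="null"
slaves_bond0="eth0 eth1 eth2"
mode_bond0="802.3ad"
# The address configuration stays on the VLAN interfaces, not on bond0 itself
config_bond0="null"
</pre>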
Only the first two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] are set up automatically. More [[Storage-Node | Storage-Nodes]] need to be integrated manually.
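A manual integration could, for example, look like this (the volume name <code>vol0</code> and the brick path are placeholders, not values the installer creates):
<pre>
# Add a third Storage-Node to the trusted pool and extend a replicated volume
gluster peer probe tier1-storage-node-03
gluster volume add-brick vol0 replica 3 tier1-storage-node-03:/data/vol0
</pre>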
=== Network Overview ===
As stated before, a minimal multi node stoney cloud environment consists of two VM-Nodes and two GlusterFS [[Storage-Node | Storage-Nodes]].
It is highly recommended to use IEEE 802.3ad link aggregation (bonding, trunking etc.) over two network cards and attach them as one logical link to the access switch.
<pre>
+----------------------+    +----------------------+
|                      |    |                      |
|      vm-node-01      |    |      vm-node-02      |
|                      |    |                      |
+----------------------+    +----------------------+
      bond0 | \              / | bond0
            |  \            /  |
            |   \          /   |
            |    \        /    |
            |     \      /     |
            |      \    /      |
            |       \  /       |                 ___
            |        \/        |             ___(   )___
            |        /\        |          __(           )__
            |       /  \       |        _(                 )_
    +-------------+    +-------------+ (                     )
    | switch-01   |====| switch-02   |-(  Corporate LAN/WAN  )
    +-------------+    +-------------+ (_                   _)
            |       \  /       |         (__             __)
            |        \/        |            (___     ___)
            |        /\        |                (___)
            |       /  \       |
            |      /    \      |
            |     /      \     |
            |    /        \    |
      bond0 |   /          \   | bond0
+-----------------------+   +-----------------------+
|                       |   |                       |
| tier1-storage-node-01 |   | tier1-storage-node-02 |
|                       |   |                       |
+-----------------------+   +-----------------------+
</pre>
If there's only one switch available (or the switches aren't stackable), connect the nodes as illustrated below. As you can see, the switch is a single point of failure.
<pre>
+----------------------+    +----------------------+
|                      |    |                      |
|      vm-node-01      |    |      vm-node-02      |
|                      |    |                      |
+----------------------+    +----------------------+
            bond0 \\          // bond0
                   \\        //                  ___
                    \\      //               ___(   )___
                     \\    //             __(           )__
                      \\  //            _(                 )_
                  +-------------+      (                     )
                  | switch-01   |------(  Corporate LAN/WAN  )
                  +-------------+      (_                   _)
                      //  \\             (__             __)
                     //    \\               (___     ___)
                    //      \\                  (___)
                   //        \\
            bond0 //          \\ bond0
+-----------------------+   +-----------------------+
|                       |   |                       |
| tier1-storage-node-01 |   | tier1-storage-node-02 |
|                       |   |                       |
+-----------------------+   +-----------------------+
</pre>
==== Network overview: Logical layer ====
The goal is to achieve the following configuration:
<pre>
+----------------+----------------+----------------+----------------+
|   10.1.110.1X  |   10.1.120.1X  |   10.1.130.1X  | 192.168.140.1X |
+----------------+----------------+----------------+----------------+
|                |                |                |      vmbr0     |
|     vlan110    |     vlan120    |     vlan130    +----------------+
|                |                |                |     vlan140    |
+----------------+----------------+----------------+----------------+
+-------------------------------------------------------------------+
|                   bond0 (bonding.mode=802.3ad)                    |
+-------------------------------------------------------------------+
+----------------+                                 +----------------+
|      eth0      |                                 |      eth1      |
+----------------+                                 +----------------+
</pre>
The ideal stoney cloud environment is based on four logically separated VLANs (virtual LANs):
* '''admin''': Administrative network, used for administration and monitoring purposes.
* '''data''': Data network, used for GlusterFS traffic.
* '''int''': Integration network, used for the internal communication of the stoney cloud services and virtual machines.
* '''pub''': Public network, used to connect the services and virtual machines to the corporate LAN/WAN.
The nodes use the following example IP addresses on the admin and data VLANs:
{| class="wikitable"
! Node
! admin (vlan110)
! data (vlan120)
|-
|tier1-storage-node-01
|10.1.110.11
|10.1.120.11
|-
|tier1-storage-node-02
|10.1.110.12
|10.1.120.12
|-
|vm-node-01
|10.1.110.13
|10.1.120.13
|-
|vm-node-02
|10.1.110.14
|10.1.120.14
|}
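On Gentoo (netifrc) the VLAN layer from the table above could be expressed as follows; this is a sketch for tier1-storage-node-01, and the /24 netmask and the variable names are assumptions:
<pre>
# /etc/conf.d/net -- VLANs on top of bond0 (sketch for tier1-storage-node-01)
vlans_bond0="110 120"
config_bond0_110="10.1.110.11/24"   # admin
config_bond0_120="10.1.120.11/24"   # data
</pre>
The VM-Nodes additionally carry vlan130 and vlan140 (the latter bridged into vmbr0), as shown in the logical layer diagram above.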
=== RAID Set Up ===
Create a RAID1 volume. This RAID-Set is used for the operating system. Please be aware that the current stoney cloud only supports disks from 147 Gigabytes up to 2 Terabytes for this first RAID-Set.
Optional: For the two [http://www.gluster.org/ GlusterFS] [[Storage-Node | Storage-Nodes]] we recommend a second RAID-Set configured as a [http://en.wikipedia.org/wiki/RAID_6#RAID_6 RAID6]-Set with battery backup.
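To verify that the first RAID-Set stays within the supported size range, the disk sizes can be checked from a live system, for example:
<pre>
# List block devices with their sizes in bytes; the OS RAID-Set must be
# between 147 GB and 2 TB
lsblk -b -d -o NAME,SIZE,MODEL
</pre>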
* '''udevd''': Linux dynamic device management, which manages events, symlinks and permissions of devices.
=== Skipping Checks ===
To skip checks, type '''no''' when asked:
 Do you want to start the installation?
 yes or no?: '''no'''
Then manually restart the stoney cloud installer with the desired options:
 /mnt/cdrom/stoney-cloud-installer -c
Options:
 -c: Skip CPU requirement checks
 -m: Skip memory requirement checks
 -s: Skip CPU and memory requirement checks
==== First Storage-Node (tier1-storage-node-01) ====
# Insert the stoney cloud CD and boot the server.
# Answer the questions as follows (the bold values are examples; they can be set by the administrator and vary according to the local setup):
## Global Section
#### Device #0: '''eth0'''
#### Device #1: '''eth1'''
### Node-Name: '''tier1-storage-node-01'''
## pub-VLAN Section
### VLAN ID: '''140'''
### Omit configuring a second DNS-Server with '''no'''
## Confirm the listed configuration with '''yes'''
## Enter your very secret root password
## Confirm to reboot with '''yes'''
# Make sure that you boot from the first hard disk and not from the installation medium again.
# Continue with [[Multi-Node Installation#Specialized_Installation|specializing your Node]]
==== Second Storage-Node (tier1-storage-node-02) ====
# Insert the stoney cloud CD and boot the server.
# Answer the questions.
# Reboot the server and make sure that you boot from the first hard disk.
==== Primary-Master-Node (vm-node-01) ====
# Insert the stoney cloud CD and boot the server.
# Answer the questions.
# Reboot the server and make sure that you boot from the first hard disk.
==== Secondary-Master-Node (vm-node-02) ====
# Insert the stoney cloud CD and boot the server.
# Answer the questions.
# Reboot the server and make sure that you boot from the first hard disk.
 
== Specialized Installation ==
==== First Storage-Node (tier1-storage-node-01) ====
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].
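The invocation is analogous to the one for the Primary-Master-Node shown further down; the exact <code>--node-type</code> value for Storage-Nodes is assumed here, see the [[fc-node-configuration]] page for the authoritative value:
 /usr/sbin/fc-node-configuration --node-type storage-node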
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.
==== Second Storage-Node (tier1-storage-node-02) ====
Before running the node configuration script, you may want to create an [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#Before_node_configuration_script | additional Backup Volume on the Storage-Node]].
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.
==== Primary-Master-Node (vm-node-01) ====
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the VM-Node]].
 /usr/sbin/fc-node-configuration --node-type primary-master-node
The stoney cloud uses virtual IP addresses (VIPs) for failover purposes. Therefore you need to configure [http://www.pureftpd.org/project/ucarp ucarp].
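For illustration, a ucarp invocation of the kind such a setup uses could look like this; all addresses, the vhid and the password are placeholders, not the values the configuration script writes:
<pre>
# Advertise a shared VIP on the admin VLAN; the master answers on the VIP,
# the backup takes over when the advertisements stop
/usr/sbin/ucarp --interface=bond0.110 --srcip=10.1.110.13 --vhid=1 \
    --pass=secret --addr=10.1.110.10 \
    --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh
</pre>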
Confirm that you want to run the script.
** Currently the user for the prov-backup-kvm daemon is the LDAP-Superuser, so enter the same password again
** Define the password for the LDAP-dhcp user (cn=dhcp,ou=services,ou=administration,dc=stoney-cloud,dc=org)
** Enter all necessary information for the stoney cloud administrator (User1)
*** Given name
*** Surname
*** Password
* Finally, enter the domain name which will correspond to the public VIP (default is stoney-cloud.example.org)
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:
** You mainly have to fill in the following variables:
*** '''libvirtHookFirewallSvnUser'''
*** '''libvirtHookFirewallSvnPassword'''
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:
** Fill in the <code>/etc/Provisioning/Backup/LDAPKVMWrapper.conf</code> file
** Create a cronjob entry which runs the script <code>/usr/bin/LDAPKVMWrapper.pl</code> once a day:
*** <code>00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM</code>
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.
==== Secondary-Master-Node (vm-node-02) ====
If you configured an additional Backup Volume on the Storage-Nodes, you may want to [[Additional_Local_Backup_Volume_on_the_Storage-Nodes#On_the_VM-Nodes | mount it now on the Secondary-Master-Node]].
* Enter the LDAP-Superuser password you defined during the [[Multi-Node Installation#Primary-Master-Node (vm-node-01)_2 | primary-master-node]] installation
 
* Due to [https://github.com/stepping-stone/node-integration/issues/9 bug #9], you need to manually finish the configuration of the libvirt hook scripts:
** You mainly have to fill in the following variables:
*** '''libvirtHookFirewallSvnUser'''
*** '''libvirtHookFirewallSvnPassword'''
** See also [https://int.stepping-stone.ch/wiki/libvirt_Hooks#Config_for_test-environment this test configuration]
* Due to [https://github.com/stepping-stone/node-integration/issues/12 bug #12], you need to manually configure the LDAPKVMWrapper.pl script:
** Fill in the <code>/etc/Provisioning/Backup/LDAPKVMWrapper.conf</code> file
** Create a cronjob entry which runs the script <code>/usr/bin/LDAPKVMWrapper.pl</code> once a day:
*** <code>00 01 * * * /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM</code>
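One possible way to install this cronjob is a <code>/etc/cron.d</code> snippet (a sketch; note the additional user field compared to a personal crontab):
<pre>
# Run the Backup-KVM wrapper once a day at 01:00 as root
cat > /etc/cron.d/backup-kvm <<'EOF'
00 01 * * * root /usr/bin/LDAPKVMWrapper.pl | logger -t Backup-KVM
EOF
</pre>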
For more information about the script and what it does, please visit the [[fc-node-configuration]] script page.
== Links ==
== Node Integration ==
The following figure gives an overview of what the node-integration script does for the different node types:
 
[[File:node-integration.png|500px|thumbnail|none|What the node-integration script does for the different node types]]
 
 
You can modify/update these steps by editing [[File:node-integration.xmi]] (you may need the [http://uml.sourceforge.net/ Umbrello UML Modeller] diagram program for KDE to display the content properly).
== Old Documentation ==
[[Category:stoney cloud]][[Category:Documentation]][[Category:Installation]]