stoney backup: Server set-up

== Requirements ==
A working stoney cloud installation, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].
== Keywords & USE-Flags ==
<pre>
echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords
</pre>
To build puttygen only without X11 (ebuild currently from our overlay, see [https://bugs.gentoo.org/show_bug.cgi?id=482816 Gentoo Bug #482816]):
<pre>
echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords
echo "net-misc/putty -gtk" >> /etc/portage/package.use
</pre>
For more information, visit the [http://www.gentoo.org/doc/en/gentoolkit.xml Gentoolkit] page.

= Base Server Software Configuration =
== OpenSSH ==
=== OpenSSH Configuration ===
=== Root CA Certificate Installation ===
Install the root CA certificate into the OpenSSL default certificate storage directory:
<pre>
fqdn="cloud.stoney-cloud.org" # The fully qualified domain name of the server containing the root certificate.
cd /etc/ssl/certs/
wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem
chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
</pre>
<pre>
# Specifies the URI of the LDAP server to connect to: the scheme, the host name and the port the
# server is listening on. If no port number is provided, the default port for the scheme is
# used (389 for ldap://, 636 for ldaps://). A space separated list of URIs may be provided.
URI ldaps://ldapm.stoney-cloud.org

# Used to specify the default base DN to use when performing ldap operations. The base must be
# specified as a Distinguished Name in LDAP format.
BASE dc=stoney-cloud,dc=org
</pre>
<pre>
auth sufficient pam_unix.so try_first_pass likeauth nullok
auth sufficient pam_ldap.so minimum_uid=1000 use_first_pass
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_ldap.so minimum_uid=1000 use_first_pass
account optional pam_permit.so
password required pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password required pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password sufficient pam_ldap.so minimum_uid=1000 use_first_pass
password required pam_deny.so
session required pam_limits.so
session required pam_unix.so
session sufficient pam_ldap.so minimum_uid=1000 use_first_pass
session optional pam_permit.so
</pre>
=== Start the necessary Daemons ===
<pre>
/etc/init.d/nslcd start
/etc/init.d/nscd start
</pre>
== Quota ==
=== 32-bit Project Identifier Support ===
We need to enable 32-bit project identifier support (the PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536). This is already the default on the stepping stone virtual machines:
 mkfs.xfs '''-i projid32bit=1''' /dev/vg-local-01/var
=== Update /etc/fstab and Mount ===
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:
<pre>
LABEL=LV-VAR  /var/backup  xfs  noatime,discard,inode64,uquota,pquota  0 2
</pre>
Create the mount point and mount the file system:
<pre>
mkdir /var/backup
chmod 755 /var/backup
mount /var/backup
</pre>
Check, if everything went ok:
<pre>
df -h | grep var
/dev/mapper/vg--local--01-var  1023G  220G  804G  22% /var/backup
</pre>
=== Verify ===
Remount the file system /var/backup and check if it has the desired values:
<pre>
xfs_quota -x -c state /var/backup
</pre>
* <code>-x</code>: Enable expert mode.
* <code>-c</code>: Pass arguments on the command line. Multiple arguments may be given.
As you can see (items marked bold), we have achieved our goal:
 User quota state on /var/backup (/dev/mapper/vg--local--01-var)
   Accounting: '''ON'''
   Enforcement: '''ON'''
   Inode: #131 (1 blocks, 1 extents)
 Group quota state on /var/backup (/dev/mapper/vg--local--01-var)
   Accounting: OFF
   Enforcement: OFF
   Inode: #132 (1 blocks, 1 extents)
 Project quota state on /var/backup (/dev/mapper/vg--local--01-var)
   Accounting: '''ON'''
   Enforcement: '''ON'''
==== Adding a User Quota ====
Set a quota of 1 gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var/backup
Or in bytes:
xfs_quota -x -c 'limit bhard=1073741824 4000187' /var/backup
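The two <code>bhard</code> values above are the same limit in different units. A quick shell arithmetic sanity check of the conversion (illustration only, not part of the server set-up):

```shell
# 1 GiB expressed in kilobytes and in bytes, as used in the two limit commands above.
kib_per_gib=$((1024 * 1024))           # 1048576 KiB = 1 GiB
bytes_per_gib=$((1024 * 1024 * 1024))  # 1073741824 bytes = 1 GiB
echo "bhard=${kib_per_gib}k"           # prints bhard=1048576k
echo "bhard=${bytes_per_gib}"          # prints bhard=1073741824
```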
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var/backup
If the user has data in the project that belongs to them, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var/backup
==== Modifying a User Quota ====
To modify a user's quota, you just set a new quota (limit):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var/backup
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var/backup
If the user has data in the project that belongs to them, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var/backup
==== Removing a User Quota ====
Removing a quota for a user:
xfs_quota -x -c 'limit bhard=0 4000187' /var/backup
The following command should give you an empty result:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
=== Project (Directory) Quotas ===
* project_ID: The uid of the online backup account (4000187).
* project_name: The name of the project. We use the uid of the online backup account (4000187), but this could be a human readable name.
* mountpoint: The mount point of the xfs file system (/var/backup). See the <code>/etc/fstab</code> entry from above.
* directory: The directory of the project (187/4000187), starting from the mount point of the xfs file system (/var/backup).
Define a unique project ID for the directory hierarchy in the <code>/etc/projects</code> file (project_ID:mountpoint/directory):
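With the values from the legend above (project ID 4000187, mount point /var/backup, directory 187/4000187), the entry would look like the following sketch; the matching <code>/etc/projid</code> entry (project_name:project_ID, here with the uid doubling as the name) is shown as well:

/etc/projects:
<pre>
4000187:/var/backup/187/4000187
</pre>
/etc/projid:
<pre>
4000187:4000187
</pre>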
Set Project:
xfs_quota -x -c 'project -s -p /var/backup/187/4000187/home/4000187 4000187' /var/backup
Set Quota (limit) on Project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var/backup
Check your quota (limit):
xfs_quota -x -c 'quota -p 4000187' /var/backup
Check the Quota:
* <code>-p</code>: display project quota information.
* <code>-h</code>: human readable format.
xfs_quota -x -c 'quota -v -N -p 4000187' /var/backup
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var/backup
If you copied data into the project, the output will look something like:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var/backup
To give you an overall view of the whole system:
xfs_quota -x -c report /var/backup
<pre>
User quota on /var/backup (/dev/mapper/vg--local--01-var)
Blocks
User ID Used Soft Hard Warn/Grace
4000187 0 0 1048576 00 [--------]
Project quota on /var/backup (/dev/mapper/vg--local--01-var)
Blocks
Project ID Used Soft Hard Warn/Grace
</pre>
==== Modifying a Project (Directory) Quota ====
To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var/backup
Check your quota (limit):
xfs_quota -x -c 'quota -p 4000187' /var/backup
==== Removing a Project (Directory) Quota ====
Removing a quota from a project:
xfs_quota -x -c 'limit -p bhard=0 4000187' /var/backup
Check the results:
xfs_quota -x -c report /var/backup
<pre>
User quota on /var/backup (/dev/mapper/vg--local--01-var)
Blocks
User ID Used Soft Hard Warn/Grace
</pre>
== prov-backup-rsnapshot ==
Install the [[stoney_backup:_prov-backup-rsnapshot | prov-backup-rsnapshot]] daemon script using the package manager:
<pre>
emerge -va sys-apps/sst-prov-backup-rsnapshot
</pre>
 
=== Configuration ===
If this is the first provisioning module running on this server (very likely), you first have to configure the provisioning daemon. You can skip this step if another provisioning module is already running on this server.

==== Provisioning global configuration ====
The global configuration file for the provisioning daemon (which was installed with the first provisioning module and the <code>sys-apps/sst-provisioning</code> package) applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules.

/etc/Provisioning/Global.conf
<pre>
# Copyright (C) 2012 stepping stone GmbH
# Switzerland
# http://www.stepping-stone.ch
# support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 0

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

# The number of seconds to wait before retrying to contact the backend server during startup.
SLEEP = 10

# Number of backend server connection retries during startup.
ATTEMPTS = 3

[Operation Mode]
# The number of seconds to wait before retrying to contact the backend server in case of a service interruption.
SLEEP = 30

# Number of backend server connection retries in case of a service interruption.
ATTEMPTS = 3

[Mail]
# Error messages are sent to the mail address configured below.
SENDTO = <YOUR-MAIL-ADDRESS>
HOST = mail.stepping-stone.ch
PORT = 587
USERNAME = <YOUR-NOTIFICATION-EMAIL-ADDRESS>
PASSWORD = <PASSWORD>
FROMNAME = Provisioning daemon
CA_DIR = /etc/ssl/certs
SSL = starttls
AUTH_METHOD = LOGIN

# Additionally, you can be informed about creation, modification and deletion of services.
WANTINFOMAIL = 1
</pre>

==== prov-backup-rsnapshot module ====
The module specific configuration is located in /etc/Provisioning/<Service>/<Type>.conf. In the case of the prov-backup-rsnapshot module this is <code>/etc/Provisioning/Backup/Rsnapshot.conf</code>. (Note: Comments starting with /* are not in the configuration file, they are only in the wiki to add some additional information.)
<pre>
#
/* If you want, you can override the log information from the global configuration file this might be useful for debugging */
[Global]
# If true the script logs errors to the log-file.
LOG_ERR = 1
/* Specify the host's fully qualified domain name. This name will be used to perform some checks and will also appear in the information and error mails */
ENVIRONMENT = <FQDN>
[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.stoney-cloud.org
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/Backup/rsnapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(sstProvisioningState=0))
/* Specifies the service itself. As it is the prov-backup-rsnapshot module, the SERVICE is "Backup" and the TYPE is "Rsnapshot".
* The MODUS is as usual selfcare and the TRANSPORTAPI is LocalCLI. This is because the daemon is running on the same host as the
* backup accounts are provisioned and the commands can be executed on this host using the cli.
* For more information about MODUS and TRANSPORTAPI see https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration
*/
[Service]
MODUS = selfcare
SERVICE = Backup
TYPE = Rsnapshot
SYSLOG = prov-backup-rsnapshot
/* For the TRANSPORTAPI LocalCLI there is no gateway required because there is no connection to establish. So set HOST, USER and
* DSA_FILE to whatever you want. Don't leave it blank, otherwise the provisioning daemon would log some error messages saying
* these attributes are empty
*/
[Gateway]
HOST = localhost
USER = none
DSA_FILE = none
/* Information about the backup itself (how to set everything up). Note that the %uid% in the RSNAPSHOT_CONFIG_FILE parameter will
* be replaced by the accounts UID. The script CREATE_CHROOT_CMD was installed with the prov-backup-rsnapshot module, so do not
* change this parameter. The quota parameters (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE and PROJID_FILE) represent
* the quota setup as described on http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed this
* manual, you can copy-paste them into your configuration file, otherwise adapt them according to your quota setup.
*/
[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/xfs_quota
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh
MOUNTPOINT = /var
QUOTA_FILE = /etc/backupSize
PROJECTS_FILE = /etc/projects
PROJID_FILE = /etc/projid
</pre>
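To illustrate the %uid% placeholder in RSNAPSHOT_CONFIG_FILE: the daemon derives a per-account rsnapshot configuration file from the account's uid. A minimal sketch of that substitution (the sed call is just an illustration, not how the Perl daemon actually does it):

```shell
uid=4000187                                         # example backup account uid
template="/etc/rsnapshot/rsnapshot.conf.%uid%"      # value of RSNAPSHOT_CONFIG_FILE
# Replace the %uid% placeholder with the concrete account uid.
config_file=$(printf '%s\n' "$template" | sed "s/%uid%/${uid}/")
echo "$config_file"   # /etc/rsnapshot/rsnapshot.conf.4000187
```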
== backup utils ==
Install the backup utils (multiple scripts which help you to manage and monitor your backup server and backup accounts) using the package manager. For more information about the scripts please see the [[stoney_backup:_Service_Software | stoney backup Service Software]] page.
<pre>
emerge -va sys-apps/sst-backup-utils
</pre>
 
/etc/conf.d/prov-backup-rsnapshot
<pre>
USER="root"
GROUP="root"
 
PIDFILE="/run/prov-backup-rsnapshot.pid"
 
# OPTIONS="..."
 
</pre>
==== Run-Level ====
rc-update add prov-backup-rsnapshot default
 
== writeAccountSize ==
If you have already installed the [[Backup_(Server_Setup)#rsnapshot | rsnapshot]] script, you also have the writeAccountSize script. Otherwise, follow [[Backup_(Server_Setup)#rsnapshot | these instructions (installation only)]].
 
=== Configuration ===
Please refer to the configuration sections for the different scripts on the [[stoney_backup:_Service_Software | stoney backup Service Software]] page.

vi /var/work/backup-util/etc/writeAccountSize.conf
<pre>
[Global]
INCOMING_DIRECTORY = /incoming
ACCOUNT_SIZE_FILE = /etc/backupSize
SNAPSHOTS = 1

[Syslog]
SYSLOG = rsnapshot

[Directory]
LDAP_SERVER = ldaps://ldapm.stoney-cloud.org
LDAP_PORT = 636
LDAP_BIND_DN = cn=Manager,dc=stoney-cloud,dc=org
LDAP_BIND_PW = <password>
LDAP_BASE_DN = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
LDAP_PERSON_BASE = ou=people,dc=stoney-cloud,dc=org
LDAP_RESELLER_BASE = ou=reseller,ou=configuration,ou=backup,ou=services,dc=stoney-cloud,dc=org
LDAP_EMAIL_ATTRIBUTE = mail

[Notification]
EMAIL_SENDER = stepping stone GmbH Support <support@stepping-stone.ch>
EMAIL_ALERT_THRESHOLD = 85
Salutation_Default_de-CH = Liebe Kundin / Lieber Kunde
Salutation_m_de-CH = Sehr geehrter Herr
Salutation_f_de-CH = Sehr geehrte Frau
Salutation_Default_en-GB = Dear customer
Salutation_m_en-GB = Dear Mr.
Salutation_f_en-GB = Dear Mrs.

[MAIL]
host = mail.stepping-stone.ch
port = 587
username = support@stepping-stone.ch
password = <password>
</pre>

== Snapshots ==
We use rsnapshot - the remote filesystem snapshot utility - for the actual snapshots, plus a handful of wrapper scripts that do things like:
* Read the users and their settings from the LDAP directory.
* Execute rsnapshot according to the users' settings.
* Write the backup (incoming), iterations (.snapshots) and free space quotas to the user's local backupSize file and update the LDAP directory.
* Inform the reseller, customer or user (depending on the settings in the LDAP directory) via mail if the quota limit has been reached.
* Depending on the user's settings in the LDAP directory, send a warning mail to the reseller, customer or user if a backup was not executed on time.

=== rsnapshot configuration directory ===
The users' individual rsnapshot configurations are stored under <code>/etc/rsnapshot</code>. Please make sure that the directory exists:
<pre>
ls -al /etc | grep rsnapshot
drwx------ 2 root root 64 30. Aug 20:20 rsnapshot
</pre>
If not, create it:
<pre>
mkdir /etc/rsnapshot
chmod 700 /etc/rsnapshot
</pre>

=== snapshot.pl Configuration ===
The snapshot.pl script is responsible for the execution of rsnapshot according to the users' settings.

/etc/backup-utils/snapshot.conf
<pre>
[General]
MaxParallelProcesses = 5
Rsnapshot_command = /usr/bin/nice -n 19 /usr/bin/rsnapshot -c /etc/rsnapshot/rsnapshot.conf.%uid% %interval%

[LDAP]
Host = ldaps://ldapm.stoney-cloud.org
Port = 636
User = cn=Manager,dc=stoney-cloud,dc=org
Password = <Password>
CA_Path = /etc/ssl/certs
Accounts_Base = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
</pre>
Legend:
* '''%uid%''': The backup account and login uid as a numeric number. For example: 4000205.
* '''%interval%''': The backup level to be executed. Possible values are hourly, daily, weekly, monthly and yearly.

=== snapshot.pl Test ===
Before adding the necessary cronjob entries, we need to make sure that we've configured the snapshot.pl script correctly:
<pre>
/usr/libexec/backup-utils/snapshot.pl --interval daily -d
</pre>
If everything worked as planned, you should receive feedback looking roughly like:
<pre>
INFO: Starting rsnapshot for interval daily with maximum 5 parallel processes
INFO: Executing snapshot for 4000080
INFO: Executing snapshot for 4000079
INFO: Snapshot process for 4000079 finished in 0.18 seconds with status 0
INFO: Snapshot process for 4000080 finished in 0.19 seconds with status 0
INFO: rsnapshot for all backups done. Took 0.24 seconds
</pre>

=== Cronjobs ===
After making sure that everything worked as planned, you can update your crontab entry:
<pre>
crontab -e
</pre>
<pre>
...
# Rsnapshot for all users
30 22 * * *   /usr/libexec/backup-utils/snapshot.pl --interval daily
15 22 * * sun /usr/libexec/backup-utils/snapshot.pl --interval weekly
00 22 1 * *   /usr/libexec/backup-utils/snapshot.pl --interval monthly
...
</pre>

== schedule warning ==
To install the new schedule warning script you have to execute the following commands:
<pre>
cd /var/work/
git clone --recursive https://github.com/stepping-stone/backup-surveillance.git
cd backup-surveillance/bin/
ln -s ../perl-utils/lib/PerlUtil/ PerlUtil
</pre>

=== Configuration ===
vi /var/work/backup-surveillance/etc/config.conf
<pre>
[XML]
SCHEDULE_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/scheduler.xml
SCHEDULE_XSD = %configpath%/../etc/schema/scheduler_schema.xsd
BACKUP_ENDED_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/backupEnded.xml
BACKUP_ENDED_XSD = %configpath%/../etc/schema/backupended_schema.xsd
BACKUP_STARTED_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/backupStarted.xml
BACKUP_STARTED_XSD = %configpath%/../etc/schema/backupstarted_schema.xsd

[TEMPLATE]
Salutation_Default_de-CH = Liebe Kundin / Lieber Kunde
Salutation_m_de-CH = Sehr geehrter Herr
Salutation_f_de-CH = Sehr geehrte Frau
Salutation_Default_en-GB = Dear customer
Salutation_m_en-GB = Dear Mr.
Salutation_f_en-GB = Dear Mrs.

[LDAP]
SERVER = ldaps://ldapm.stoney-cloud.org
PORT = 636
DEBUG = 1
ADMIN_DN = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <Password>
BACKUP_BASE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
PEOPLE_BASE = ou=people,dc=stoney-cloud,dc=org
RESELLER_BASE = ou=reseller,ou=configuration,ou=backup,ou=services,dc=stoney-cloud,dc=org
SCOPE = sub

[MAIL]
mailTo =
host = mail.stepping-stone.ch
port = 587
username =
password =
from =
</pre>
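As a reminder, the crontab time fields are, in order: minute, hour, day of month, month, day of week. A quick sketch labelling the fields of the daily entry above (illustration only):

```shell
set -f                      # keep the literal * fields from being glob-expanded
entry="30 22 * * *"         # schedule of the daily snapshot.pl entry
set -- $entry               # split the entry into its five time fields
minute=$1 hour=$2 dom=$3 month=$4 dow=$5
set +f
echo "minute=$minute hour=$hour dom=$dom month=$month dow=$dow"
# prints minute=30 hour=22 dom=* month=* dow=*
```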
= Links =
* [http://www.busybox.net/ Busybox] BusyBox combines tiny versions of many common UNIX utilities into a single small executable. Useful to reduce the number of files (and thus the complexity) when building a chroot.
[[Category:stoney backup]]