stoney backup: Server set-up
Revision as of 08:58, 19 June 2014
Contents
- 1 Abstract
- 2 Overview
- 3 Software Installation
- 4 Base Server Software Configuration
- 4.1 OpenSSH
- 4.2 OpenLDAP
- 4.3 Random Number Generator (haveged)
- 4.4 nss-pam-ldapd
- 4.5 Quota
- 4.6 prov-backup-rsnapshot
- 4.7 backup utils
- 5 stoney backup Service Software
- 6 Links
Abstract
This document describes server setup for the stoney cloud (Online) Backup service, built upon the Gentoo Linux distribution.
Overview
After working through this documentation, you will be able to set up and configure your own (Online) Backup service server.
Software Installation
Requirements
A working stoney cloud, installed according to stoney cloud: Single-Node Installation or stoney cloud: Multi-Node Installation.
Keywords & USE-Flags
For a minimal OpenLDAP directory installation:
echo "net-nds/openldap minimal sasl" >> /etc/portage/package.use
echo "net-nds/openldap ~amd64" >> /etc/portage/package.keywords
NSS and PAM modules for lookups using LDAP:
echo "sys-auth/nss-pam-ldapd sasl" >> /etc/portage/package.use
echo "sys-auth/nss-pam-ldapd ~amd64" >> /etc/portage/package.keywords
echo "sys-fs/quota ldap" >> /etc/portage/package.use
echo "=app-admin/jailkit-2.16 ~amd64" >> /etc/portage/package.keywords
For the prov-backup-rsnapshot daemon:
echo "dev-perl/Net-SMTPS ~amd64" >> /etc/portage/package.keywords
echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords
To build puttygen only without X11:
echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords
echo "net-misc/putty -gtk" >> /etc/portage/package.use
Emerge
emerge -va nss-pam-ldapd \
  quota \
  net-misc/putty \
  app-admin/jailkit \
  sys-apps/haveged \
  sys-apps/sst-backup-utils \
  sys-apps/sst-prov-backup-rsnapshot
To list the dependencies of ebuilds, you can use equery:
equery depgraph sst-backup-utils
 * Searching for sst-backup-utils ...
 * dependency graph for sys-apps/sst-backup-utils-0.1.0
 `-- sys-apps/sst-backup-utils-0.1.0  amd64
   `-- dev-perl/PerlUtil-0.1.0  (>=dev-perl/PerlUtil-0.1.0) amd64
   `-- virtual/perl-Sys-Syslog-0.320.0  (virtual/perl-Sys-Syslog) amd64
   `-- dev-perl/perl-ldap-0.530.0  (dev-perl/perl-ldap) amd64
   `-- dev-perl/XML-Simple-2.200.0  (dev-perl/XML-Simple) amd64
   `-- dev-perl/Config-IniFiles-2.780.0  (dev-perl/Config-IniFiles) amd64
   `-- dev-perl/XML-Validator-Schema-1.100.0  (dev-perl/XML-Validator-Schema) amd64
   `-- dev-perl/Date-Calc-6.300.0  (dev-perl/Date-Calc) amd64
   `-- dev-perl/DateManip-6.310.0  (dev-perl/DateManip) amd64
   `-- dev-perl/Schedule-Cron-Events-1.930.0  (dev-perl/Schedule-Cron-Events) amd64
   `-- dev-perl/DateTime-Format-Strptime-1.520.0  (dev-perl/DateTime-Format-Strptime) amd64
   `-- dev-perl/XML-SAX-0.990.0  (dev-perl/XML-SAX) amd64
   `-- virtual/perl-MIME-Base64-3.130.0-r2  (virtual/perl-MIME-Base64) amd64
   `-- dev-perl/Authen-SASL-2.160.0  (dev-perl/Authen-SASL) amd64
   `-- dev-perl/Net-SMTPS-0.30.0  (dev-perl/Net-SMTPS) ~amd64
   `-- dev-perl/text-template-1.450.0  (dev-perl/text-template) amd64
   `-- virtual/perl-Getopt-Long-2.380.0-r2  (virtual/perl-Getopt-Long) amd64
   `-- dev-perl/Parallel-ForkManager-1.20.0  (dev-perl/Parallel-ForkManager) amd64
   `-- dev-perl/Time-Stopwatch-1.0.0  (dev-perl/Time-Stopwatch) amd64
   `-- app-backup/rsnapshot-1.3.1-r1  (app-backup/rsnapshot) amd64
[ sys-apps/sst-backup-utils-0.1.0 stats: packages (20), max depth (1) ]
For more information, visit the Gentoolkit page.
Base Server Software Configuration
OpenSSH
OpenSSH Configuration
Configure the OpenSSH daemon:
vi /etc/ssh/sshd_config
Set the following options:
PubkeyAuthentication yes
PasswordAuthentication yes
UsePAM yes
Subsystem sftp internal-sftp
Make sure that
Subsystem sftp internal-sftp
is the last line in the configuration file.
We want to limit the number of chroot environments per folder. As the ChrootDirectory configuration option only allows %h (the home directory of the user) and %u (the username of the user), we need to create the necessary matching rules in the form of:
Match User *000
    ChrootDirectory /var/backup/000/%u
    AuthorizedKeysFile /var/backup/000/%u/%h/.ssh/authorized_keys
Match

Match User *001
    ChrootDirectory /var/backup/001/%u
    AuthorizedKeysFile /var/backup/001/%u/%h/.ssh/authorized_keys
Match

...

Match User *999
    ChrootDirectory /var/backup/999/%u
    AuthorizedKeysFile /var/backup/999/%u/%h/.ssh/authorized_keys
Match
The creation of the matching rules is done by executing the following bash commands:
FILE=/etc/ssh/sshd_config
for x in {0..999} ; do
    printf "Match User *%03d\n" $x >> ${FILE}
    printf "    ChrootDirectory /var/backup/%03d/%%u\n" $x >> ${FILE}
    printf "    AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" $x >> ${FILE}
    printf "Match\n" >> ${FILE}
done
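Before appending to the real sshd_config, you can sanity-check the generator against a scratch file and confirm that all 1000 Match blocks come out as expected (this check is a sketch, not part of the stoney tooling):

```shell
#!/bin/bash
# Run the same generator loop against a temporary file and count the
# resulting "Match User" lines; there should be exactly 1000 of them.
FILE=$(mktemp)
for x in {0..999}; do
    printf "Match User *%03d\n" "$x" >> "${FILE}"
    printf "    ChrootDirectory /var/backup/%03d/%%u\n" "$x" >> "${FILE}"
    printf "    AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" "$x" >> "${FILE}"
    printf "Match\n" >> "${FILE}"
done
grep -c "^Match User" "${FILE}"   # expect 1000
rm -f "${FILE}"
```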
Don't forget to restart the OpenSSH daemon:
/etc/init.d/sshd restart
OpenSSH Host Keys
If you migrate from an existing backup server, you might want to copy the SSH host keys to the new server. If you do so, clients won't see a difference between the two hosts, as the fingerprint remains the same. Copy the following files from the existing host to the new one:
- /etc/ssh/ssh_host_dsa_key
- /etc/ssh/ssh_host_ecdsa_key
- /etc/ssh/ssh_host_key
- /etc/ssh/ssh_host_rsa_key
- /etc/ssh/ssh_host_dsa_key.pub
- /etc/ssh/ssh_host_ecdsa_key.pub
- /etc/ssh/ssh_host_key.pub
- /etc/ssh/ssh_host_rsa_key.pub
Set the correct permissions on the new host:
chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_key /etc/ssh/ssh_host_rsa_key
chmod 644 /etc/ssh/*.pub
Then restart the SSH daemon. Caution: do not close your existing SSH session until you are sure the daemon has restarted properly and you can log in again.
/etc/init.d/sshd restart
OpenLDAP
/etc/hosts
Update /etc/hosts with the LDAP server:
/etc/hosts
# VIP of the LDAP Server
31.216.40.4   ldapm.stoney-cloud.org
Root CA Certificate Installation
Install the root CA certificate into the OpenSSL default certificate storage directory:
fqdn="cloud.stoney-cloud.org" # The fully qualified domain name of the server containing the root certificate.
cd /etc/ssl/certs/
wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem
chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
Rebuild the CA hashes
c_rehash /etc/ssl/certs/
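To confirm that c_rehash picked up the new CA, you can check that the subject-hash symlink it creates actually exists. The verify_ca_hash helper below is a hypothetical sketch, not part of the stoney tooling:

```shell
# Hypothetical helper: check that the <subject-hash>.0 link which c_rehash
# should have created for a certificate is present in the same directory.
verify_ca_hash() {
    cert="$1"
    dir=$(dirname "$cert")
    hash=$(openssl x509 -hash -noout -in "$cert") || return 1
    if [ -e "${dir}/${hash}.0" ]; then
        echo "ok: ${hash}.0 -> $(basename "$cert")"
    else
        echo "missing ${dir}/${hash}.0 -- re-run c_rehash ${dir}" >&2
        return 1
    fi
}

# Example (after the wget/chown/chmod steps above):
# verify_ca_hash /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
```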
/etc/openldap/ldap.conf
Update the /etc/openldap/ldap.conf
LDAP configuration file/environment variables:
/etc/openldap/ldap.conf
# Used to specify a size limit to use when performing searches. The number should be a
# non-negative integer. SIZELIMIT of zero (0) specifies unlimited search size.
SIZELIMIT  20000

# Used to specify a time limit to use when performing searches. The number should be a
# non-negative integer. TIMELIMIT of zero (0) specifies unlimited search time to be used.
TIMELIMIT  45

# Specify how aliases dereferencing is done. DEREF should be set to one of never, always, search,
# or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when
# searching, or dereferenced only when locating the base object for the search. The default is to
# never dereference aliases.
DEREF      never

# Specifies the URI(s) of an LDAP server(s) to which the LDAP library should connect. The URI
# scheme may be either ldap or ldaps which refer to LDAP over TCP and LDAP over SSL (TLS)
# respectively. Each server's name can be specified as a domain-style name or an IP address
# literal. Optionally, the server's name can be followed by a ':' and the port number the LDAP
# server is listening on. If no port number is provided, the default port for the scheme is
# used (389 for ldap://, 636 for ldaps://). A space separated list of URIs may be provided.
URI        ldaps://ldapm.stoney-cloud.org

# Used to specify the default base DN to use when performing ldap operations. The base must be
# specified as a Distinguished Name in LDAP format.
BASE       dc=stoney-cloud,dc=org

# This is a local copy of the certificate of the certificate authority
# used to sign the server certificate for the LDAP server I am using
TLS_CACERT /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
Check your configuration by performing a search:
ldapsearch -v -H "ldaps://ldapm.stoney-cloud.org" \
           -b "dc=stoney-cloud,dc=org" \
           -D "cn=Manager,dc=stoney-cloud,dc=org" \
           -s one "(objectClass=*)" \
           -LLL -W
The result should look something like:
ldap_initialize( ldaps://ldapm.stoney-cloud.org:636/??base )
filter: (objectClass=*)
requesting: All userApplication attributes
dn: ou=administration,dc=stoney-cloud,dc=org
objectClass: top
objectClass: organizationalUnit
ou: administration
...
Random Number Generator (haveged)
Tools like putty depend on random numbers to be able to create cryptographic keys and certificates.
haveged - Generate random numbers and feed linux random device
The haveged daemon doesn't need any special configuration, so you can start it directly from the command line:
/etc/init.d/haveged start
Check, if the start was successful:
ps auxf | grep haveged
root 18001 1.0 0.0 7420 3616 ? Ss 08:48 0:00 /usr/sbin/haveged -r 0 -w 1024 -v 1
Add the haveged daemon to the default run level:
rc-update add haveged default
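To see whether haveged is actually keeping the kernel entropy pool filled, you can read the pool level from procfs. This is a sketch; the threshold used below is an arbitrary example, not a stoney requirement (newer kernels pin the estimate at a constant value, so treat it as illustrative):

```shell
# Print the kernel's current entropy estimate; haveged should keep this from
# draining to near zero. Falls back to 0 if the proc file is unavailable.
entropy_avail() {
    cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0
}

if [ "$(entropy_avail)" -lt 100 ]; then
    echo "entropy pool is low -- check that haveged is running"
else
    echo "entropy pool looks healthy"
fi
```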
nss-pam-ldapd
nslcd.conf — configuration file for LDAP nameservice daemon
/etc/nslcd.conf
# This is the configuration file for the LDAP nameservice
# switch library's nslcd daemon. It configures the mapping
# between NSS names (see /etc/nsswitch.conf) and LDAP
# information in the directory.
# See the manual page nslcd.conf(5) for more information.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The uri pointing to the LDAP server to use for name lookups.
# Multiple entries may be specified. The address that is used
# here should be resolvable without using LDAP (obviously).
#uri ldap://127.0.0.1/
#uri ldaps://127.0.0.1/
#uri ldapi://%2fvar%2frun%2fldapi_sock/
# Note: %2f encodes the '/' used as directory separator
uri ldaps://ldapm.tombstone.ch

# The LDAP version to use (defaults to 3
# if supported by client library)
#ldap_version 3

# The distinguished name of the search base.
base dc=stoney-cloud,dc=org

# The distinguished name to bind to the server with.
# Optional: default is to bind anonymously.
binddn cn=Manager,dc=stoney-cloud,dc=org

# The credentials to bind with.
# Optional: default is no credentials.
# Note that if you set a bindpw you should check the permissions of this file.
bindpw myverysecretpassword

# The distinguished name to perform password modifications by root by.
#rootpwmoddn cn=admin,dc=example,dc=com

# The default search scope.
#scope sub
#scope one
#scope base

# Customize certain database lookups.
#base group ou=Groups,dc=example,dc=com
base group ou=groups,ou=backup,ou=services,dc=stoney-cloud,dc=org
base passwd ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
base shadow ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
#scope group onelevel
#scope hosts sub
#filter group (&(objectClass=posixGroup)(sstIsActive=TRUE))
filter passwd (&(objectClass=posixAccount)(sstIsActive=TRUE))
filter shadow (&(objectClass=shadowAccount)(sstIsActive=TRUE))

# Bind/connect timelimit.
#bind_timelimit 30

# Search timelimit.
#timelimit 30

# Idle timelimit. nslcd will close connections if the
# server has not been contacted for the number of seconds.
#idle_timelimit 3600

# Use StartTLS without verifying the server certificate.
#ssl start_tls
tls_reqcert never

# CA certificates for server certificate verification
#tls_cacertdir /etc/ssl/certs
#tls_cacertfile /etc/ssl/ca.cert

# Seed the PRNG if /dev/urandom is not provided
#tls_randfile /var/run/egd-pool

# SSL cipher suite
# See man ciphers for syntax
#tls_ciphers TLSv1

# Client certificate and key
# Use these, if your server requires client authentication.
#tls_cert
#tls_key

# Mappings for Services for UNIX 3.5
#filter passwd (objectClass=User)
#map passwd uid msSFU30Name
#map passwd userPassword msSFU30Password
#map passwd homeDirectory msSFU30HomeDirectory
#map passwd homeDirectory msSFUHomeDirectory
#filter shadow (objectClass=User)
#map shadow uid msSFU30Name
#map shadow userPassword msSFU30Password
#filter group (objectClass=Group)
#map group member msSFU30PosixMember

# Mappings for Services for UNIX 2.0
#filter passwd (objectClass=User)
#map passwd uid msSFUName
#map passwd userPassword msSFUPassword
#map passwd homeDirectory msSFUHomeDirectory
#map passwd gecos msSFUName
#filter shadow (objectClass=User)
#map shadow uid msSFUName
#map shadow userPassword msSFUPassword
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=Group)
#map group member posixMember

# Mappings for Active Directory
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map passwd uid sAMAccountName
#map passwd homeDirectory unixHomeDirectory
#map passwd gecos displayName
#filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map shadow uid sAMAccountName
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=group)

# Alternative mappings for Active Directory
# (replace the SIDs in the objectSid mappings with the value for your domain)
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(objectClass=person)(!(objectClass=computer)))
#map passwd uid cn
#map passwd uidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd homeDirectory "/home/$cn"
#map passwd gecos displayName
#map passwd loginShell "/bin/bash"
#filter group (|(objectClass=group)(objectClass=person))
#map group gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820

# Mappings for AIX SecureWay
#filter passwd (objectClass=aixAccount)
#map passwd uid userName
#map passwd userPassword passwordChar
#map passwd uidNumber uid
#map passwd gidNumber gid
#filter group (objectClass=aixAccessGroup)
#map group cn groupName
#map group gidNumber gid
nsswitch.conf - Name Service Switch configuration file
/etc/nsswitch.conf
passwd:      files ldap
shadow:      files ldap
group:       files ldap

# passwd:    db files nis
# shadow:    db files nis
# group:     db files nis

hosts:       files dns
networks:    files dns

services:    db files
protocols:   db files
rpc:         db files
ethers:      db files
netmasks:    files
netgroup:    files
bootparams:  files
automount:   files
aliases:     files
system-auth
vi /etc/pam.d/system-auth
auth       required     pam_env.so
auth       sufficient   pam_unix.so try_first_pass likeauth nullok
auth       sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
auth       required     pam_deny.so

account    required     pam_unix.so
account    sufficient   pam_ldap.so minimum_uid=1000 use_first_pass

password   required     pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password   required     pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password   sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
password   required     pam_deny.so

session    required     pam_limits.so
session    required     pam_env.so
session    required     pam_unix.so
session    sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
Test the Setup
nslcd -d
Update the Default Run Levels
rc-update add nslcd default
rc-update add nscd default
Start the necessary Daemons
/etc/init.d/nslcd start
/etc/init.d/nscd start
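Once nslcd and nscd are running, NSS lookups should resolve LDAP accounts as well as local ones. A quick smoke test (the uid 4000187 is the example backup account used elsewhere on this page, so substitute one that exists in your directory):

```shell
# Local account: proves the NSS chain itself works regardless of LDAP.
getent passwd root

# LDAP-backed account (example uid from this page): prints one passwd line
# only when nslcd can reach the directory and the filters match.
getent passwd 4000187
```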
Quota
32-bit Project Identifier Support
We need to enable 32-bit project identifier support (PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536), which is already the default on the stepping stone virtual machines:
mkfs.xfs -i projid32bit=1 /dev/vg-local-01/var
Update /etc/fstab and Mount
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:
LABEL=LV-VAR /var xfs noatime,discard,inode64,uquota,pquota 0 2
reboot
Check, if everything went ok:
df -h | grep var
/dev/mapper/vg--local--01-var 1023G 220G 804G 22% /var
Verify
Some important options for xfs_quota:
- -x: Enable expert mode.
- -c: Pass arguments on the command line. Multiple arguments may be given.
Remount the file system /var and check, if /var has the desired values:
xfs_quota -x -c state /var
As the Accounting and Enforcement lines for the user and project quotas show, we have achieved our goal:
User quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: ON
  Enforcement: ON
  Inode: #131 (1 blocks, 1 extents)
Group quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: OFF
  Enforcement: OFF
  Inode: #132 (1 blocks, 1 extents)
Project quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: ON
  Enforcement: ON
  Inode: #132 (1 blocks, 1 extents)
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
Realtime Blocks grace time: [7 days 00:00:30]
User Quotas
Adding a User Quota
Set a quota of 1 gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var
Or in bytes:
xfs_quota -x -c 'limit bhard=1073741824 4000187' /var
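The arithmetic behind these figures is simple. A hypothetical helper (not part of the backup tooling) that turns a size in gigabytes into the kilobyte value used with bhard=:

```shell
# Convert gigabytes to the kilobyte figure for xfs_quota's bhard= argument
# (1 GB = 1024 MB = 1048576 KB).
gb_to_bhard() {
    echo "$(( $1 * 1024 * 1024 ))k"
}

gb_to_bhard 1   # prints 1048576k
```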
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var
If the user has data in the project that belongs to them, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var
Modifying a User Quota
To modify a user's quota, you just set a new quota (limit):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var
If the user has data in the project that belongs to them, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var
Removing a User Quota
Removing a quota for a user:
xfs_quota -x -c 'limit bhard=0 4000187' /var
The following command should give you an empty result:
xfs_quota -x -c 'quota -v -N -u 4000187' /var
Project (Directory) Quotas
Adding a Project (Directory) Quota
The XFS file system additionally allows you to set quotas on individual directory hierarchies in the file system that are known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name. We'll use the following values in the examples:
- project_ID: The uid of the online backup account (4000187).
- project_name: The uid of the online backup account (4000187). This could be a human readable name.
- mountpoint: The mountpoint of the xfs-filesystem (/var). See the /etc/fstab entry from above.
- directory: The directory of the project (187/4000187), starting from the mountpoint of the xfs-filesystem (/var).
Define a unique project ID for the directory hierarchy in the /etc/projects
file (project_ID:mountpoint/directory):
echo "4000187:/var/backup/187/4000187/home/4000187" >> /etc/projects
Create an entry in the /etc/projid
file that maps a project name to the project ID (project_name:project_ID):
echo "4000187:4000187" >> /etc/projid
Set Project:
xfs_quota -x -c 'project -s -p /var/backup/187/4000187/home/4000187 4000187' /var
Set Quota (limit) on Project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var
Check your Quota (limit)
xfs_quota -x -c 'quota -p 4000187' /var
Check the quota:
- -v: increase verbosity in reporting (also dumps zero values).
- -N: suppress the initial header.
- -p: display project quota information.
- -h: human readable format.
xfs_quota -x -c 'quota -v -N -p 4000187' /var
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var
If you copied data into the project, the output will look something like:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var
To give you an overall view of the whole system:
xfs_quota -x -c report /var
User quota on /var (/dev/mapper/vg--local--01-var)
                          Blocks
User ID          Used   Soft    Hard  Warn/Grace
---------- --------------------------------------------------
root          1024000      0       0  00 [--------]
4000187             0      0 1048576  00 [--------]

Project quota on /var (/dev/mapper/vg--local--01-var)
                          Blocks
Project ID       Used   Soft    Hard  Warn/Grace
---------- --------------------------------------------------
4000187        512000      0 1048576  00 [--------]
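The four steps above (entry in /etc/projects, entry in /etc/projid, project -s, limit) can be sketched as a single dry-run script that only prints the commands it would run, so the sequence can be reviewed before touching the filesystem. The setup_project_quota helper and its argument order are illustrative, not part of the stoney tooling:

```shell
# Dry-run sketch: print, rather than execute, the project-quota commands for
# one backup account. Paths follow this page's naming scheme.
setup_project_quota() {
    uid="$1"; dir="$2"; mnt="$3"; limit="$4"
    echo "echo ${uid}:${dir} >> /etc/projects"
    echo "echo ${uid}:${uid} >> /etc/projid"
    echo "xfs_quota -x -c 'project -s -p ${dir} ${uid}' ${mnt}"
    echo "xfs_quota -x -c 'limit -p bhard=${limit} ${uid}' ${mnt}"
}

setup_project_quota 4000187 /var/backup/187/4000187/home/4000187 /var 1048576k
```

Run the printed commands by hand (or pipe them through sh) once the output matches what this section describes.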
Modifying a Project (Directory) Quota
To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var
Check your quota (limit)
xfs_quota -x -c 'quota -p 4000187' /var
Removing a Project (Directory) Quota
Removing a quota from a project:
xfs_quota -x -c 'limit -p bhard=0 4000187' /var
Check the results:
xfs_quota -x -c report /var
User quota on /var (/dev/mapper/vg--local--01-var)
                          Blocks
User ID          Used   Soft    Hard  Warn/Grace
---------- --------------------------------------------------
root           512000      0       0  00 [--------]
4000187             0      0    1024  00 [--------]
As you can see, the line with the Project ID 4000187 has disappeared:
4000187 512000 0 1048576 00 [--------]
Don't forget to remove the project from /etc/projects and /etc/projid:
sed -i -e '/4000187/d' /etc/projects
sed -i -e '/4000187/d' /etc/projid
Some important notes concerning XFS
- The quotacheck command has no effect on XFS filesystems. The first time quota accounting is turned on (at mount time), XFS does an automatic quotacheck internally; afterwards, the quota system will always be completely consistent until quotas are manually turned off.
- There is no need for quota file(s) in the root of the XFS filesystem.
prov-backup-rsnapshot
Install the prov-backup-rsnapshot daemon script using the package manager:
emerge -va sys-apps/sst-prov-backup-rsnapshot
Configuration
If this is the first provisioning module running on this server (very likely), you first have to configure the provisioning daemon. You can skip this step if another provisioning module is already running on this server.
Provisioning global configuration
The global configuration for the provisioning daemon applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules.
/etc/Provisioning/Global.conf
# Copyright (C) 2012 stepping stone GmbH
#                    Switzerland
#                    http://www.stepping-stone.ch
#                    support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 0

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

# The number of seconds to wait before retry contacting the backend server during startup.
SLEEP = 10

# Number of backend server connection retries during startup.
ATTEMPTS = 3

[Operation Mode]
# The number of seconds to wait before retry contacting the backend server in case of a service interruption.
SLEEP = 30

# Number of backend server connection retries in case of a service interruption.
ATTEMPTS = 3

[Mail]
# Error messages are sent to the mail address configured below.
SENDTO = <YOUR-MAIL-ADDRESS>
HOST = mail.stepping-stone.ch
PORT = 587
USERNAME = <YOUR-NOTIFICATION-EMAIL-ADDRESS>
PASSWORD = <PASSWORD>
FROMNAME = Provisioning daemon
CA_DIR = /etc/ssl/certs
SSL = starttls
AUTH_METHOD = LOGIN

# Additionally, you can be informed about creation, modification and deletion of services.
WANTINFOMAIL = 1
Provisioning daemon prov-backup-rsnapshot module
The module-specific configuration is located in /etc/Provisioning/<Service>/<Type>.conf. In the case of the prov-backup-rsnapshot module this is /etc/Provisioning/Backup/Rsnapshot.conf. (Note: comments starting with /* are not in the configuration file; they are only in the wiki to add some additional information.)
# Copyright (C) 2013 stepping stone GmbH
#                    Switzerland
#                    http://www.stepping-stone.ch
#                    support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

/* If you want, you can override the log settings from the global configuration
 * file. This might be useful for debugging. */
[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 1

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

/* Specify the host's fully qualified domain name. This name will be used to perform
 * some checks and also appears in the information and error mails. */
ENVIRONMENT = <FQDN>

[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.tombstone.org
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /etc/Provisioning/Backup/rsnapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(sstProvisioningState=0))

/* Specifies the service itself. As it is the prov-backup-rsnapshot module, the SERVICE is
 * "Backup" and the TYPE is "Rsnapshot". The MODUS is as usual selfcare and the TRANSPORTAPI
 * is LocalCLI. This is because the daemon is running on the same host as the backup accounts
 * are provisioned and the commands can be executed on this host using the CLI.
 * For more information about MODUS and TRANSPORTAPI see
 * https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration */
[Service]
MODUS = selfcare
TRANSPORTAPI = LocalCLI
SERVICE = Backup
TYPE = Rsnapshot
SYSLOG = prov-backup-rsnapshot

/* For the TRANSPORTAPI LocalCLI no gateway is required because there is no connection to
 * establish. So set HOST, USER and DSA_FILE to whatever you want. Don't leave them blank,
 * otherwise the provisioning daemon would log some error messages saying these attributes
 * are empty. */
[Gateway]
HOST = localhost
USER = provisioning
DSA_FILE = none

/* Information about the backup itself (how to set up everything). Note that the %uid% in the
 * RSNAPSHOT_CONFIG_FILE parameter will be replaced by the account's UID. The script
 * CREATE_CHROOT_CMD was installed with the prov-backup-rsnapshot module, so do not change
 * this parameter. The quota parameters (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE
 * and PROJID_FILE) represent the quota setup as described on
 * http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed
 * this manual, you can copy-paste them into your configuration file, otherwise adapt them
 * according to your quota setup. */
[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/xfs_quota
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh
MOUNTPOINT = /var
QUOTA_FILE = /etc/backupSize
PROJECTS_FILE = /etc/projects
PROJID_FILE = /etc/projid
backup utils
Install the backup utils (multiple scripts which help you to manage and monitor your backup server and backup accounts) using the package manager. For more information about the scripts please see the stoney backup Service Software section.
emerge -va sys-apps/sst-backup-utils
Configuration
Please refer to the configuration sections for the different scripts in stoney backup Service Software.
stoney backup Service Software
The stoney backup Service comes along with multiple scripts which help you to manage and monitor your backup server and accounts:
We use rsnapshot - remote filesystem snapshot utility for the actual snapshots and a handful of wrapper scripts that do things like:
- Read the users and their settings from the LDAP directory.
- Execute rsnapshot according to the user's settings.
- Write the disk usage for the backup (incoming), the iterations (.snapshots) and the free space to the user's local backupSize file and update the LDAP directory.
- Inform the reseller, customer or user (depending on the settings in the LDAP directory) via mail if the quota limit has been reached.
- Depending on the user's settings in the LDAP directory, send a warning mail to the reseller, customer or user if a backup was not executed on time.
writeAccountSize.pl
This script is installed to /usr/libexec/backup-utils/writeAccountSize.pl
by the sys-apps/sst-backup-utils
package and does the following:
- Calculates the used disk space (backup and iterations) for a given account and writes the corresponding values to:
  - The LDAP backend (used by the selfcare webinterface to display quota information):
    - Backup space used (sstBackupSize): The disk space the account uses for the backup itself (disk space used under the incoming folder of the user's chroot-home directory).
    - Snapshot space used (sstIncrementSize): The disk space the account uses for the iterations (disk space under the .snapshot folder of the user's chroot-home directory).
  - The file etc/backupSize of the account's chroot (used by the Sepiola Online Backup client).
- Checks if the user and/or reseller must be notified that there is no more disk space left for the given account:
  - Checks if the notification flag was passed; if not, no notification will be triggered.
  - Calculates the used disk space (backup and iterations) as a percentage.
  - Reads the notification threshold value from the LDAP backend.
  - If the disk space used (in percent) is bigger than the value retrieved from the LDAP backend, starts the notification process with:
    - Product: Given account UID
    - Service: Backup
    - Problem: Quota
- Pod documentation:
NAME
    writeAccountSize.pl

DESCRIPTION
    This script gets quota information from the filesystem and the size of the
    incoming and snapshots directories, writes the data to a file and the LDAP
    backend, and sends an e-mail message for each account that is over quota to
    the user's e-mail address (from the LDAP directory) if the notification
    flag is passed.
    The configuration file for this script is stored in the backup-utils
    configuration directory (/etc/backup-utils/) and is called
    writeAccountSize.conf.
    The script needs access to the quota program to get quota information.
    The script needs LDAP access to get the user's e-mail address and quota
    information.
    The script uses syslog for logging purposes.

    Command Line Interface (CLI) parameters:
    -C configfile    The configuration file.
    -U uid           The user id.
    -n notification  Start notification process if quota threshold is reached.
    -d debug         Turns the debug mode on.
    -h help          This online help.

USAGE
    writeAccountSize.pl -U uid [-C configuration file] [-d debug] [-h help] [-n]

CREATED
    2009-04-16 michael.rhyner@stepping-stone.ch created

VERSION
    2009-04-16 michael.rhyner@stepping-stone.ch created
    2009-04-30 michael.rhyner@stepping-stone.ch changed position based quota output parsing with correctly parsed elements
    2009-06-15 michael.rhyner@stepping-stone.ch added over quota check and sending e-mail
    2009-06-16 michael.rhyner@stepping-stone.ch renamed script and made it more generally usable (e.g. for online backup, online storage, ...)
    2009-06-17 michael.rhyner@stepping-stone.ch changed mail message to read from a text file instead from configuration parameter
    2009-06-18 michael.rhyner@stepping-stone.ch corrected wrong regex to weed out the asterisk (*) in getQuotaSize
    2009-06-19 michael.rhyner@stepping-stone.ch corrected wrong evaluation success from subroutines and avoid message output when not in debug mode
    2009-06-22 michael.rhyner@stepping-stone.ch getQuotaSize: return immediately if no quota was set
    2009-06-24 michael.rhyner@stepping-stone.ch alert when used certain percentage of allowed space instead of more than allowed space
    2009-06-26 michael.rhyner@stepping-stone.ch values are presented in Gigabytes within notification message
    2009-07-23 michael.rhyner@stepping-stone.ch corrected wrong syslog severities for errors
    2009-07-24 michael.rhyner@stepping-stone.ch made e-mail address available within message body
    2013-08-19 pat.klaey@stepping-stone.ch write quota values also to the LDAP

INCORPORATED CODE
    Incorporate code with use:
        warnings;
        strict;
        Config::IniFiles;
        Getopt::Std;
        Sys::Syslog;
        File::Basename;
        Text::Template;
        POSIX;
        Notification;
        PerlUtil::Logging;
        PerlUtil::LDAPUtil;
Configuration
/etc/backup-utils/writeAccountSize.conf
[Global]
INCOMING_DIRECTORY = /incoming
ACCOUNT_SIZE_FILE = /etc/backupSize
SNAPSHOTS = 1

[Syslog]
SYSLOG = rsnapshot

[Directory]
LDAP_SERVER = ldaps://ldapm.tombstone.ch
LDAP_PORT = 636
LDAP_BIND_DN = cn=Manager,dc=stoney-cloud,dc=org
LDAP_BIND_PW = <password>
LDAP_BASE_DN = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
LDAP_PERSON_BASE = ou=people,dc=stoney-cloud,dc=org
LDAP_RESELLER_BASE = ou=reseller,ou=configuration,ou=backup,ou=services,dc=stoney-cloud,dc=org
LDAP_EMAIL_ATTRIBUTE = mail

[Notification]
EMAIL_SENDER = stepping stone GmbH Support <support@stepping-stone.ch>
EMAIL_ALERT_THRESHOLD = 85
Salutation_Default_de-CH = Liebe Kundin / Lieber Kunde
Salutation_m_de-CH = Sehr geehrter Herr
Salutation_f_de-CH = Sehr geehrte Frau
Salutation_Default_en-GB = Dear customer
Salutation_m_en-GB = Dear Mr.
Salutation_f_en-GB = Dear Mrs.
Tests
/usr/libexec/backup-utils/writeAccountSize.pl -U 4000080 -d
Debug modus was turned on
Debug sub checkUsersHomeDirectory: $localUsersHomeDirectory: /var/backup/080/4000080/home/4000080
Debug sub checkUsersHomeDirectory: The $localUsersHomeDirectory /var/backup/080/4000080/home/4000080 exists
Debug sub checkUsersIncomingDirectory: $localUsersHomeDirectory: /var/backup/080/4000080/home/4000080
Debug sub checkUsersIncomingDirectory: $localUsersIncomingDirectory: /incoming
Debug sub checkUsersIncomingDirectory: $localIncomingPath: /var/backup/080/4000080/home/4000080/incoming
Debug sub checkUsersIncomingDirectory: The $localIncomingPath /var/backup/080/4000080/home/4000080/incoming exists
Total Quota: 1048576 kilobytes
Total used Space: 0 kilobytes
Incoming Size: 0 kilobytes
Debug sub getSnapshotsSize: $localUsedQuota: 0
Debug sub getSnapshotsSize: $localSnapshotsSize: 0
Debug writeAccountSize: Working on /var/backup/080/4000080/etc/backupSize
Debug: wrote 1024 0 0 to /var/backup/080/4000080/etc/backupSize
DEBUG: Successfully executed the following modifications for entry uid=4000080,ou=accounts,ou=backup,ou=services,o=stepping-stone,c=ch: sstBackupSize => 0
DEBUG: Successfully executed the following modifications for entry uid=4000080,ou=accounts,ou=backup,ou=services,o=stepping-stone,c=ch: sstIncrementSize => 0
Alert Threshold: 85 %
Calculated value: 0
Now write some data (200 megabytes in this example) into the user's incoming directory and then execute the script again:
dd if=/dev/zero of=/var/backup/080/4000080/home/4000080/incoming/test.zeros bs=1024k count=200
chown 4000080:4000080 /var/backup/080/4000080/home/4000080/incoming/test.zeros
/usr/libexec/backup-utils/writeAccountSize.pl -U 4000080 -d
Debug modus was turned on
Debug sub checkUsersHomeDirectory: $localUsersHomeDirectory: /var/backup/080/4000080/home/4000080
Debug sub checkUsersHomeDirectory: The $localUsersHomeDirectory /var/backup/080/4000080/home/4000080 exists
Debug sub checkUsersIncomingDirectory: $localUsersHomeDirectory: /var/backup/080/4000080/home/4000080
Debug sub checkUsersIncomingDirectory: $localUsersIncomingDirectory: /incoming
Debug sub checkUsersIncomingDirectory: $localIncomingPath: /var/backup/080/4000080/home/4000080/incoming
Debug sub checkUsersIncomingDirectory: The $localIncomingPath /var/backup/080/4000080/home/4000080/incoming exists
Total Quota: 1048576 kilobytes
Total used Space: 204800 kilobytes
Incoming Size: 204800 kilobytes
Debug sub getSnapshotsSize: $localUsedQuota: 204800
Debug sub getSnapshotsSize: $localSnapshotsSize: 0
Debug writeAccountSize: Working on /var/backup/080/4000080/etc/backupSize
Debug: wrote 1024 200 0 to /var/backup/080/4000080/etc/backupSize
DEBUG: Successfully executed the following modifications for entry uid=4000080,ou=accounts,ou=backup,ou=services,o=stepping-stone,c=ch: sstBackupSize => 209715200
DEBUG: Successfully executed the following modifications for entry uid=4000080,ou=accounts,ou=backup,ou=services,o=stepping-stone,c=ch: sstIncrementSize => 0
Alert Threshold: 85 %
Calculated value: 19.53125
Everything seems to be working fine!
snapshot.pl
This script is installed to /usr/libexec/backup-utils/snapshot.pl
by the sys-apps/sst-backup-utils
package and does the following:
- Read interval parameter value passed
- The interval parameter value can be daily, weekly, monthly (or yearly)
- Find all active backup accounts for which the rsnapshot command must be executed (depending on the given interval)
- Filter to find these accounts:
(&(sstBackupInterval<INTERVAL>=*)(sstIsActive=TRUE))
for example, for the daily rsnapshot the filter is (&(sstBackupIntervalDaily=*)(sstIsActive=TRUE))
- In other words this means: get all accounts that have a value defined for
sstBackupInterval<INTERVAL>
AND
sstIsActive
set to "TRUE"
- According to the interval given and the account UID calculate the rsnapshot command for all these accounts
- For example
- Account UID: 4000000
- Interval: daily
- Resulting rsnapshot command: /usr/bin/nice -n 19 /usr/bin/rsnapshot -c /etc/rsnapshot/rsnapshot.conf.4000000 daily (if you use the configuration below)
- Execute all these commands
- Use controlled parallel execution, you can specify how many commands can be executed in parallel (see configuration below)
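For illustration, the filter and command construction described above can be sketched in shell. These are hypothetical helpers; the real logic lives inside snapshot.pl.

```shell
#!/bin/sh
# Sketch of how snapshot.pl derives the LDAP filter and the rsnapshot
# command for a given interval (hypothetical helpers, for illustration).

ldap_filter() {
    interval=$1
    # Capitalise the first letter: daily -> Daily
    cap=$(printf '%s' "$interval" | awk '{ print toupper(substr($0,1,1)) substr($0,2) }')
    echo "(&(sstBackupInterval${cap}=*)(sstIsActive=TRUE))"
}

rsnapshot_command() {
    uid=$1
    interval=$2
    echo "/usr/bin/nice -n 19 /usr/bin/rsnapshot -c /etc/rsnapshot/rsnapshot.conf.${uid} ${interval}"
}
```

For example, `ldap_filter daily` prints the daily filter shown above, and `rsnapshot_command 4000000 daily` prints the corresponding rsnapshot command line.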
- Pod documentation:
NAME
 snapshot.pl

DESCRIPTION
 This script gets all active online backup accounts from the LDAP backend for which the rsnapshot process for the given interval must be executed. According to these accounts and the given interval, the commands to be executed are generated and finally executed. The commands can be executed in parallel; however, there is a limit defined in the configuration file which limits the amount of parallel running processes.

USAGE
 ./snapshot.pl --interval interval [--debug] [--help]

OPTIONS
 --interval/-i interval   Specifies the rsnapshot interval (can be hourly, daily, weekly, monthly or yearly)
 --debug/-d               Turns on debug mode
 --help/-h                Shows this help

CREATED
 2012-03-19 pat.klaey@stepping-stone.ch created

VERSION
 2012-03-19 pat.klaey@stepping-stone.ch created

INCORPORATED CODE
 Incorporated code with use:
 warnings; strict; Getopt::Long; Sys::Syslog; PerlUtil::Logging; PerlUtil::LDAPUtil; File::Basename; Parallel::ForkManager; Time::Stopwatch;
Configuration
There are two things to do in this step:
- Configure the rsnapshot root directory
- Configure the snapshot.pl script itself
rsnapshot configuration directory
The users' individual rsnapshot configurations are stored under /etc/rsnapshot
. Please make sure that the directory exists:
ls -al /etc | grep rsnapshot
drwx------ 2 root root 64 30. Aug 20:20 rsnapshot
If not, create it:
mkdir /etc/rsnapshot
chmod 700 /etc/rsnapshot
snapshot.pl Configuration
The snapshot.pl script is responsible for the execution of rsnapshot according to the users settings.
/etc/backup-utils/snapshot.conf
[General]
MaxParallelProcesses = 5
Rsnapshot_command = /usr/bin/nice -n 19 /usr/bin/rsnapshot -c /etc/rsnapshot/rsnapshot.conf.%uid% %interval%

[LDAP]
Host = ldaps://ldapm.tombstone.ch
Port = 636
User = cn=Manager,dc=stoney-cloud,dc=org
Password = <Password>
CA_Path = /etc/ssl/certs
Accounts_Base = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
Legend: at runtime, the placeholders are replaced as follows:
- %uid%: The backup account and login uid as a numeric number. For example: 4000205.
- %interval%: The backup level to be executed. Possible values are hourly, daily, weekly, monthly and yearly.
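The placeholder substitution can be illustrated with a small shell sketch (a hypothetical helper; the real substitution happens inside snapshot.pl):

```shell
#!/bin/sh
# Sketch of the placeholder substitution: the Rsnapshot_command
# template from snapshot.conf with %uid% and %interval% filled in.

expand_template() {
    template=$1
    uid=$2
    interval=$3
    echo "$template" | sed -e "s/%uid%/${uid}/g" -e "s/%interval%/${interval}/g"
}
```

Calling `expand_template` with the Rsnapshot_command template, a UID such as 4000205 and an interval such as daily yields the ready-to-run rsnapshot command line.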
Tests
Before adding the necessary cronjob entries, we need to make sure that we've configured the snapshot.pl script correctly:
/usr/libexec/backup-utils/snapshot.pl --interval daily -d
If everything worked as planned, you should receive feedback looking roughly like:
INFO: Starting rsnapshot for interval daily with maximum 5 parallel processes
INFO: Executing snapshot for 4000080
INFO: Executing snapshot for 4000079
INFO: Snapshot process for 4000079 finished in 0.18 seconds with status 0
INFO: Snapshot process for 4000080 finished in 0.19 seconds with status 0
INFO: rsnapshot for all backups done. Took 0.24 seconds
Just to make sure that everything did work out fine, execute
again:
/usr/libexec/backup-utils/writeAccountSize.pl -U 4000080 -d
Debug modus was turned on
Debug sub checkUsersHomeDirectory: $localUsersHomeDirectory: /var/backup/080/4000080/home/4000080
Debug sub checkUsersHomeDirectory: The $localUsersHomeDirectory /var/backup/080/4000080/home/4000080 exists
Debug sub checkUsersIncomingDirectory: $localUsersHomeDirectory: /var/backup/080/4000080/home/4000080
Debug sub checkUsersIncomingDirectory: $localUsersIncomingDirectory: /incoming
Debug sub checkUsersIncomingDirectory: $localIncomingPath: /var/backup/080/4000080/home/4000080/incoming
Debug sub checkUsersIncomingDirectory: The $localIncomingPath /var/backup/080/4000080/home/4000080/incoming exists
Total Quota: 1048576 kilobytes
Total used Space: 409600 kilobytes
Incoming Size: 204800 kilobytes
Debug sub getSnapshotsSize: $localUsedQuota: 409600
Debug sub getSnapshotsSize: $localSnapshotsSize: 204800
Debug writeAccountSize: Working on /var/backup/080/4000080/etc/backupSize
Debug: wrote 1024 200 200 to /var/backup/080/4000080/etc/backupSize
DEBUG: Successfully executed the following modifications for entry uid=4000080,ou=accounts,ou=backup,ou=services,o=stepping-stone,c=ch: sstBackupSize => 209715200
DEBUG: Successfully executed the following modifications for entry uid=4000080,ou=accounts,ou=backup,ou=services,o=stepping-stone,c=ch: sstIncrementSize => 209715200
Alert Threshold: 85 %
Calculated value: 39.0625
As you can see, the calculated quota usage has risen to 39.0625 %.
Cronjobs
After making sure that everything worked as planned, you can update your crontab entry:
crontab -e
...
# Rsnapshot for all users
30 22 * * * /usr/libexec/backup-utils/snapshot.pl --interval daily
15 22 * * sun /usr/libexec/backup-utils/snapshot.pl --interval weekly
00 22 1 * * /usr/libexec/backup-utils/snapshot.pl --interval monthly
...
- TBD: This may not be optimal if there is a lot of data to rotate. In that case the weekly and daily snapshots might, for example, run at the same time, which could lead to strange results.
- Workarounds:
- Simple/short version: Instead, one could create another wrapper script which is called every day and applies some simple logic:
- Is today the first day of a month?
- Yes: Is today Sunday?
- Yes: Execute the monthly, then weekly, then daily rsnapshots using the snapshot.pl script (but wait for each interval to finish before starting the next)
- No: Execute the monthly, then daily rsnapshots using the snapshot.pl script (but wait for the monthly interval to finish before starting the daily)
- No: Is today Sunday?
- Yes: Execute the weekly, then daily rsnapshots using the snapshot.pl script (but wait for the weekly interval to finish before starting the daily)
- No: Execute the daily rsnapshots using the snapshot.pl script
- Complex/long version: Adapt the snapshot.pl script and call it every day without an interval parameter; the script then applies the logic described above.
This avoids the above-mentioned collision.
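The "simple/short version" described above could look roughly like the following wrapper sketch (a hypothetical script, not shipped with sys-apps/sst-backup-utils):

```shell
#!/bin/sh
# Decide which intervals to run today and execute them one after the
# other, so the monthly/weekly/daily rotations never overlap.

SNAPSHOT=/usr/libexec/backup-utils/snapshot.pl

intervals_for_today() {
    day_of_month=$1   # 1..31
    day_of_week=$2    # 1..7, where 7 = Sunday (as printed by `date +%u`)
    intervals=""
    [ "$day_of_month" -eq 1 ] && intervals="monthly"
    [ "$day_of_week" -eq 7 ] && intervals="$intervals weekly"
    intervals="$intervals daily"
    # Unquoted echo collapses the extra spaces
    echo $intervals
}

run_snapshots() {
    for interval in $(intervals_for_today "$(date +%d)" "$(date +%u)"); do
        "$SNAPSHOT" --interval "$interval"   # blocks until the interval is done
    done
}
```

For example, on the first day of a month that is also a Sunday, `intervals_for_today 1 7` yields `monthly weekly daily`, matching the decision tree above.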
Complex/short version: implement the rule via cronjobs:
# Rsnapshot for all users
30 22 * * 1-6 [ $( date +\%d ) -gt 1 ] && /usr/libexec/backup-utils/snapshot.pl --interval daily
15 22 * * 7 [ $( date +\%d ) -gt 1 ] && /usr/libexec/backup-utils/snapshot.pl --interval weekly && /usr/libexec/backup-utils/snapshot.pl --interval daily
00 22 1 * * /usr/libexec/backup-utils/snapshot.pl --interval monthly && /usr/libexec/backup-utils/snapshot.pl --interval weekly && /usr/libexec/backup-utils/snapshot.pl --interval daily
...
scheduleWarning.pl
This script is installed to /usr/libexec/backup-utils/scheduleWarning.pl
by the sys-apps/sst-backup-utils
package and does the following:
- The basic task of this script is simple: For the given account
- Check if the planned backups were started
- If not, start the notification process with
- Product: Given account UID
- Service: Backup
- Problem: Schedule
- Check if the planned backups finished successfully
- If not, start the notification process with
- Product: Given account UID
- Service: Backup
- Problem: Unsuccessful
As the backup clients distributed by stepping stone GmbH upload metadata XML before (scheduling information and start time) and after (end time and backup status) the actual backup, the scheduleWarning.pl script is able to verify whether or not a planned backup has been executed and whether or not the backup was successful.
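The core of that check can be sketched as follows. This is heavily simplified: the real script parses and validates the XML files and compares the scheduling information with the actual start and end times, while this sketch only tests which metadata files are present.

```shell
#!/bin/sh
# Rough sketch of the per-computer check performed by scheduleWarning.pl.

check_backup() {
    dir=$1   # the .sepiola_backup directory of one computer
    if [ ! -f "$dir/backupStarted.xml" ]; then
        echo "Schedule"       # planned backup was never started
    elif [ ! -f "$dir/backupEnded.xml" ]; then
        echo "Unsuccessful"   # backup started but never finished
    else
        echo "OK"
    fi
}
```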
- Pod documentation
NAME
 scheduleWarning.pl

DESCRIPTION
 This script tests whether a planned backup was successful or not. There are two different types of failure.
 1. The backup did not start
 The first type of error is that a backup is scheduled for YYYY-MM-DD at HH:MM, but the backup does not start at this specified time. A possible reason could be that the computer was shut down.
 2. The backup was not successful
 The second type of error is that a backup started as scheduled but did not finish successfully. There are different reasons for this error.
 If a backup wasn't successful, the script checks which type of error occurred. It reads the XML files which are stored on the server and compares the given information. If the error is detected, the script starts the notification process with the information about the error.
 If a user has more than one computer backed up, the script tests one computer after the other. The mail(s) sent by the program also contain(s) the information about which computer is affected.

OPTION
 -U uid   The -U option is required to run the script; it indicates for which uid the script is executed.

USAGE
 scheduleWarning.pl [-U user]

CREATED
 2010-04-14 created Pat Kläy <pat.klaey@stepping-stone.ch>

VERSION
 2010-04-14 v0.01, created pkl
 2010-08-24 v0.02, modified pkl - now using Net::SMTP::TLS to send mails
 2013-09-13 v0.03, modified pat.klaey@stepping-stone.ch - changes to use the script with the new backup infrastructure (read more information from LDAP, use the Notification lib to send mails)

USES
 strict; warnings; XML::Simple; Config::IniFiles; XML::Validator::Schema; Date::Calc qw(:all); Date::Manip; Schedule::Cron::Events; DateTime::Format::Strptime; Sys::Syslog; XML::SAX::ParserFactory; Getopt::Std; MIME::Base64; Authen::SASL; Net::LDAPS; Net::SMTP::TLS; Cwd 'abs_path'; PerlUtil::Logging; PerlUtil::LDAPUtil;
Configuration
/etc/backup-utils/scheduleWarning.conf
[Backup]
CHROOT_DIRECTORY = /var/backup/%lastthree%/%user%

[XML]
SCHEDULE_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/scheduler.xml
SCHEDULE_XSD = /etc/backup-utils/schema/scheduler_schema.xsd
BACKUP_ENDED_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/backupEnded.xml
BACKUP_ENDED_XSD = /etc/backup-utils/schema/backupended_schema.xsd
BACKUP_STARTED_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/backupStarted.xml
BACKUP_STARTED_XSD = /etc/backup-utils/schema/backupstarted_schema.xsd

[TEMPLATE]
Salutation_Default_de-CH = Liebe Kundin / Lieber Kunde
Salutation_m_de-CH = Sehr geehrter Herr
Salutation_f_de-CH = Sehr geehrte Frau
Salutation_Default_en-GB = Dear customer
Salutation_m_en-GB = Dear Mr.
Salutation_f_en-GB = Dear Mrs.

[LDAP]
SERVER = ldaps://ldapm.tombstone.ch
PORT = 636
DEBUG = 1
ADMIN_DN = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <Password>
BACKUP_BASE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
PEOPLE_BASE = ou=people,dc=stoney-cloud,dc=org
RESELLER_BASE = ou=reseller,ou=configuration,ou=backup,ou=services,dc=stoney-cloud,dc=org
SCOPE = sub
checkBackups.pl
This script is installed to /usr/libexec/backup-utils/checkBackups.pl
by the sys-apps/sst-backup-utils
package. It is a wrapper around the above mentioned writeAccountSize.pl
and scheduleWarning.pl
scripts (if you configure it accordingly).
- As mentioned, the script is a wrapper for calling the quota and scheduler information scripts:
- It gets all active backup accounts from the LDAP directory
- According to the CLI parameters passed, it calls for all these accounts:
- The quota checking script (this script can be defined in the configuration file, see configuration section below)
- The scheduler checking script (this script can be defined in the configuration file, see configuration section below)
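The wrapper logic can be sketched like this (a hypothetical illustration: the real script reads the account list from the LDAP directory and the script paths from checkBackups.conf; here the UIDs are passed as arguments and the commands are echoed instead of executed):

```shell
#!/bin/sh
# Sketch of the checkBackups.pl wrapper logic, for illustration only.

QUOTA_SCRIPT=/usr/libexec/backup-utils/writeAccountSize.pl
SCHEDULE_SCRIPT=/usr/libexec/backup-utils/scheduleWarning.pl

check_accounts() {
    do_quota=$1      # "yes" to run the quota script
    do_schedule=$2   # "yes" to run the schedule script
    notify=$3        # non-empty to pass the notification flag
    shift 3
    for uid in "$@"; do
        [ "$do_quota" = yes ] && echo "$QUOTA_SCRIPT -U $uid${notify:+ -n}"
        [ "$do_schedule" = yes ] && echo "$SCHEDULE_SCRIPT -U $uid"
    done
}
```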
- Pod documentation:
NAME
 checkBackups.pl

DESCRIPTION
 This script processes all active backup accounts and checks (according to the command line options) the quota and/or the scheduled backups for the given accounts.
 The script uses syslog for logging purposes.
 Command Line Interface (CLI) parameters:

OPTIONS
 --schedule/-s   Checks schedule information by calling the defined script
 --quota/-q      Checks the quota values by calling the defined script
 --notify/-n     Passes the notify flag to the called scripts

USAGE
 ./checkBackups.pl [--quota] [--schedule] [--notify]
 ./checkBackups.pl --quota
   Checks and writes the quota for all active backup accounts. The user and/or reseller will NOT be informed if any quota threshold is reached.
 ./checkBackups.pl --quota --notify
   Checks and writes the quota for all active backup accounts. The user and/or reseller will be informed if any quota threshold is reached.
 ./checkBackups.pl --schedule
   Checks for all active backup accounts whether the scheduled backup was executed and successful. The --schedule option will always trigger a notification mail to the user and/or reseller.
 ./checkBackups.pl --schedule --quota
   Checks both quota and schedule for all active backup accounts. Only schedule will trigger notification mails.
 ./checkBackups.pl --schedule --quota --notify
   Checks both quota and schedule for all active backup accounts. Both quota and schedule will trigger notification mails.

CREATED
 2013-09-17 pat.klaey@stepping-stone.ch created

VERSION
 2013-09-17 pat.klaey@stepping-stone.ch created
 2013-11-19 pat.klaey@stepping-stone.ch added options to be able to check quota, schedule or both in one run

INCORPORATED CODE
 Incorporated code with use:
 warnings; strict; Getopt::Long; Sys::Syslog; Cwd 'abs_path'; File::Basename; PerlUtil::LDAPUtil; PerlUtil::Logging;
Configuration
In the configuration you have to define two simple things:
- Which script to call for quota checks and scheduler checks
- How to access the LDAP backend
/etc/backup-utils/checkBackups.conf
[Scripts]
CheckQuotaScript = /usr/libexec/backup-utils/writeAccountSize.pl
CheckScheduleScript = /usr/libexec/backup-utils/scheduleWarning.pl

[LDAP]
Server = ldaps://ldapm.tombstone.ch
Port = 636
Username = cn=Manager,dc=stoney-cloud,dc=org
Password = <PASSWORD>
AccountBase = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
Cronjobs
Since you can choose whether to pass the notification flag to the scripts, you can call the quota script, say, every 10 minutes. This makes sense if you followed the configuration section, as the writeAccountSize.pl script writes the quota values to the LDAP backend, where they are then visible in the selfcare web interface. That way you have a reasonably up-to-date quota representation in the selfcare web interface. On the other hand, you don't want to send notification mails to the user every 10 minutes; it might be enough to send them once a day, so the cronjobs may look like this:
# Check backups
15 12 * * * /usr/libexec/backup-utils/checkBackups.pl --schedule --quota --notify
# Write quota values every 10 minutes
*/10 * * * * /usr/libexec/backup-utils/checkBackups.pl --quota
ResellerBackupBilling.pl
This script is installed to /usr/libexec/backup-utils/ResellerBackupBilling.pl
by the sys-apps/sst-backup-utils
package. The pod documentation says everything that needs to be said about this script.
- Pod documentation
NAME
 ResellerBackupBilling.pl

DESCRIPTION
 This script processes the reseller with the passed UID and creates a very simplistic backup-billing overview which is then sent to the address specified in the configuration file.
 The billing overview contains all backup accounts which belong to the given reseller, as well as their quota. According to the billing plan specified in the configuration file, the script also calculates the price for each backup account.
 The script uses syslog for logging purposes.
 Command Line Interface (CLI) parameters:

OPTIONS
 --reseller/-r UID   Process the reseller with this UID
 --debug/-d          Turn on debug mode
 --help/-h           Display this help

USAGE
 ./ResellerBackupBilling.pl --reseller UID [--debug] [--help]
 ./ResellerBackupBilling.pl --reseller 2000000
   This processes the reseller with the UID 2000000 and sends the billing overview to the address specified in the configuration file.

CREATED
 2014-01-24 pat.klaey@stepping-stone.ch created

VERSION
 2014-01-24 pat.klaey@stepping-stone.ch created

INCORPORATED CODE
 Incorporated code with use:
 warnings; strict; Getopt::Long; PerlUtil::LDAPUtil; PerlUtil::Logging; PerlUtil::Mail; Number::Format; File::Basename; Cwd 'abs_path';
Configuration
You have to configure basically three things:
- How to access the LDAP directory (similar to the other backup-util scripts)
- Define a pricing schema
- Define a base price for an account
- Define a GB price
- The total price for each account will finally be:
base price + ( quota in GB * GB price )
- How to send the mail
- You need to enter a valid (outgoing) mail server and corresponding port
- User and password to authenticate on the mail server
- A valid recipient (typically the company’s backoffice address) and sender
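The pricing formula can be illustrated with a quick sketch, using the sample values AccountBasePrice = 5.00 and AccountGBPrice = 0.50 from the configuration below:

```shell
#!/bin/sh
# Sketch of the per-account price calculation:
#   base price + (quota in GB * GB price)
# using awk for the decimal arithmetic.

account_price() {
    quota_gb=$1
    base=$2
    per_gb=$3
    awk -v q="$quota_gb" -v b="$base" -v g="$per_gb" \
        'BEGIN { printf "%.2f\n", b + q * g }'
}
```

For example, a 10 GB account costs `account_price 10 5.00 0.50` = 10.00.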
/etc/backup-utils/ResellerBackupBilling.conf
[LDAP]
Server = ldaps://ldapm.tombstone.ch
Port = 636
Username = cn=Manager,dc=stoney-cloud,dc=org
Password = <Password>
AccountBase = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
CustomerBase = ou=customers,dc=stoney-cloud,dc=org
ResellerBase = ou=reseller,dc=stoney-cloud,dc=org

[Pricing]
AccountBasePrice = 5.00
AccountGBPrice = 0.50

[Mail]
Server = <Mail-Server>
Port = 587
Username = <Sender-Email-Address>
Password = <Password>
To = <Backoffice>
From = Billing Script <E-Mail-Address>
writeDate.pl
This script is installed to /usr/libexec/backup-utils/writeDate.pl
by the sys-apps/sst-backup-utils
package. This is a very simple script: it writes the current date and time to a file on the backup server before rsnapshot rotates the backup. This date is used by the Sepiola Online Backup Client to display the iterations and their dates.
- Pod documentation:
NAME
 writeDate.pl

DESCRIPTION
 This script writes the current date and time into the file for each backup before rsnapshot rotates the backup. This information is used by the Sepiola Online Backup Client.
 The configuration file for this script is stored under /etc/backup-utils/writeDate.conf
 The script uses syslog for logging purposes.
 Command Line Interface (CLI) parameters:
 -C configfile   The configuration file.
 -U uid          The user id.
 -d debug        Turns the debug mode on.
 -h help         This online help.

USAGE
 writeDate.pl [-C configuration_file] [-U uid] [-d debug] [-h help]

CREATED
 2007-09-16 michael.eichenberger@stepping-stone.ch created

VERSION
 2013-09-01 michael.eichenberger@stepping-stone.ch updated to reflect changes in the new backup environment
 2007-09-16 michael.eichenberger@stepping-stone.ch created

INCORPORATED CODE
 Incorporated code with use:
 warnings; strict; Config::IniFiles; Getopt::Std; Sys::Syslog;
Configuration
/etc/backup-utils/writeDate.conf
[Global]
BACKUP_DIRECTORY = /incoming
BACKUP_BACKUP_TIME_FILE = /.sepiola_backup/backupTime

[Syslog]
SYSLOG = rsnapshot
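What the script does per account can be sketched in shell (a rough illustration; the exact timestamp format written by the Perl script is an assumption here):

```shell
#!/bin/sh
# Rough sketch of what writeDate.pl does for one account: write the
# current date and time into the backupTime file below the incoming
# directory (paths as in writeDate.conf above).

write_backup_time() {
    home=$1   # the account's home directory
    file="$home/incoming/.sepiola_backup/backupTime"
    mkdir -p "$(dirname "$file")"
    # NOTE: assumed timestamp format, for illustration only
    date '+%Y-%m-%d %H:%M:%S' > "$file"
}
```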
rsnapshot wrapper bash scripts
There are two rsnapshot wrapper scripts (both bash scripts); they are installed by the sys-apps/sst-backup-utils
package:
-
/usr/libexec/backup-utils/rsnapshotPreExecWrapper.sh
- This script is executed before the rsnapshot process and simply calls the writeDate.pl script.
-
/usr/libexec/backup-utils/rsnapshotPostExecWrapper.sh
- This script is executed after the rsnapshot process and simply calls the writeAccountSize.pl script.
Note: These scripts do not need any configuration.
Links
- OpenLDAP, an open source implementation of the Lightweight Directory Access Protocol.
- nss-pam-ldapd, a Name Service Switch (NSS) module that allows your LDAP server to provide user account, group, host name, alias, netgroup, and basically any other information that you would normally get from /etc flat files or NIS.
- Gentoo guide to OpenLDAP authentication (German).
- Centralized authentication using OpenLDAP.
- openssh-lpk_openldap.schema OpenSSH LDAP Public Keys.
- linuxquota Linux DiskQuota.
- rsnapshot, a remote filesystem snapshot utility, based on rsync.
- Jailkit, set of utilities to limit user accounts to specific files using chroot() and or specific commands. Also includes a tool to build a chroot environment.
- Busybox BusyBox combines tiny versions of many common UNIX utilities into a single small executable. Useful to reduce the number of files (and thus the complexity) when building a chroot.