stoney backup: Server set-up

== Requirements ==
A working stoney cloud installation, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].
== Keywords & USE-Flags ==
For a full OpenLDAP directory installation:
 echo "net-nds/openldap overlays perl minimal sasl" >> /etc/portage/package.use
 echo "net-nds/openldap ~amd64" >> /etc/portage/package.keywords

For a minimal OpenLDAP directory installation (just the necessary tools) and the NSS and PAM modules for lookups using LDAP:
 echo "sys-auth/nss-pam-ldapd sasl" >> /etc/portage/package.use
 echo "sys-auth/nss-pam-ldapd ~amd64" >> /etc/portage/package.keywords
 echo "net-nds/openldap ~amd64" >> /etc/portage/package.keywords
 echo "sys-fs/quota ldap" >> /etc/portage/package.use
 
For jailkit (used to build and manage the chroot environments):
 echo "=app-admin/jailkit-2.16 ~amd64" >> /etc/portage/package.keywords
 
For the prov-backup-rsnapshot daemon:
echo "dev-perl/Net-SMTPS ~amd64" >> /etc/portage/package.keywords
echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords
 
To build only puttygen, without X11 support:
echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords
echo "net-misc/putty -gtk" >> /etc/portage/package.use
== Emerge ==
 emerge -va nss-pam-ldapd \
  quota \
  rsnapshot \
  net-misc/putty \
  app-admin/jailkit \
  sys-apps/haveged \
  sys-apps/sst-backup-utils \
  sys-apps/sst-prov-backup-rsnapshot
To list the dependencies of ebuilds, you can use <code>equery</code>:
 equery depgraph sst-backup-utils
<pre>
 * Searching for sst-backup-utils ...

 * dependency graph for sys-apps/sst-backup-utils-0.1.0
 `-- sys-apps/sst-backup-utils-0.1.0  amd64
   `-- dev-perl/PerlUtil-0.1.0  (>= dev-perl/PerlUtil-0.1.0) amd64
   `-- virtual/perl-Sys-Syslog-0.320.0  (virtual/perl-Sys-Syslog) amd64
   `-- dev-perl/perl-ldap-0.530.0  (dev-perl/perl-ldap) amd64
   `-- dev-perl/XML-Simple-2.200.0  (dev-perl/XML-Simple) amd64
   `-- dev-perl/Config-IniFiles-2.780.0  (dev-perl/Config-IniFiles) amd64
   `-- dev-perl/XML-Validator-Schema-1.100.0  (dev-perl/XML-Validator-Schema) amd64
   `-- dev-perl/Date-Calc-6.300.0  (dev-perl/Date-Calc) amd64
   `-- dev-perl/DateManip-6.310.0  (dev-perl/DateManip) amd64
   `-- dev-perl/Schedule-Cron-Events-1.930.0  (dev-perl/Schedule-Cron-Events) amd64
   `-- dev-perl/DateTime-Format-Strptime-1.520.0  (dev-perl/DateTime-Format-Strptime) amd64
   `-- dev-perl/XML-SAX-0.990.0  (dev-perl/XML-SAX) amd64
   `-- virtual/perl-MIME-Base64-3.130.0-r2  (virtual/perl-MIME-Base64) amd64
   `-- dev-perl/Authen-SASL-2.160.0  (dev-perl/Authen-SASL) amd64
   `-- dev-perl/Net-SMTPS-0.30.0  (dev-perl/Net-SMTPS) ~amd64
   `-- dev-perl/text-template-1.450.0  (dev-perl/text-template) amd64
   `-- virtual/perl-Getopt-Long-2.380.0-r2  (virtual/perl-Getopt-Long) amd64
   `-- dev-perl/Parallel-ForkManager-1.20.0  (dev-perl/Parallel-ForkManager) amd64
   `-- dev-perl/Time-Stopwatch-1.0.0  (dev-perl/Time-Stopwatch) amd64
   `-- app-backup/rsnapshot-1.3.1-r1  (app-backup/rsnapshot) amd64
 [ sys-apps/sst-backup-utils-0.1.0 stats: packages (20), max depth (1) ]
</pre>
For more information, visit the [http://www.gentoo.org/doc/en/gentoolkit.xml Gentoolkit] page.
= Base Server Software Configuration =
== OpenSSH ==
=== OpenSSH Configuration ===
Configure the OpenSSH daemon:
<source lang="bash">
vi /etc/ssh/sshd_config
</source>
Set the following options:
<source lang="bash">
PubkeyAuthentication yes
PasswordAuthentication yes
UsePAM yes
Subsystem sftp internal-sftp
</source>
Make sure that <code>Subsystem sftp internal-sftp</code> is the last line in the configuration file.

We want to reduce the number of chroot environments in one folder. As the <code>ChrootDirectory</code> configuration option only allows <code>%h</code> (home directory of the user) and <code>%u</code> (username of the user), we need to create the necessary matching rules in the form of:
<source lang="bash">
Match User *000
        ChrootDirectory /var/backup/000/%u
        AuthorizedKeysFile /var/backup/000/%u/%h/.ssh/authorized_keys
Match
Match User *001
        ChrootDirectory /var/backup/001/%u
        AuthorizedKeysFile /var/backup/001/%u/%h/.ssh/authorized_keys
Match
...
Match User *999
        ChrootDirectory /var/backup/999/%u
        AuthorizedKeysFile /var/backup/999/%u/%h/.ssh/authorized_keys
Match
</source>
The matching rules can be created by executing the following bash commands:
<source lang="bash">
FILE=/etc/ssh/sshd_config; for x in {0..999} ; do \
  printf "Match User *%03d\n" $x >> ${FILE}; \
  printf "        ChrootDirectory /var/backup/%03d/%%u\n" $x >> ${FILE}; \
  printf "        AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" $x >> ${FILE}; \
  printf "Match\n" >> ${FILE}; \
done
</source>
Don't forget to restart the OpenSSH daemon:
<source lang="bash">
/etc/init.d/sshd restart
</source>

=== OpenSSH Host Keys ===
If you migrate from an existing backup server, you might want to copy the SSH host keys to the new server. If you do so, clients won't see a difference between the two hosts, as the fingerprint remains the same.
Copy the following files from the existing host to the new one:
* /etc/ssh/ssh_host_dsa_key
* /etc/ssh/ssh_host_ecdsa_key
* /etc/ssh/ssh_host_key
* /etc/ssh/ssh_host_rsa_key
* /etc/ssh/ssh_host_dsa_key.pub
* /etc/ssh/ssh_host_ecdsa_key.pub
* /etc/ssh/ssh_host_key.pub
* /etc/ssh/ssh_host_rsa_key.pub

Set the correct permissions on the new host:
 chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_key /etc/ssh/ssh_host_rsa_key
 chmod 644 /etc/ssh/*.pub

And restart the SSH daemon. ''Caution'': do not close your existing SSH session until you are sure the SSH daemon has restarted properly and you can log in again.
 /etc/init.d/sshd restart
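Before appending the chroot matching rules to the live <code>sshd_config</code>, the generation loop shown earlier can be rehearsed against a scratch file; this sketch only assumes a POSIX shell with <code>mktemp</code> and <code>seq</code>:

```shell
# Rehearse the Match-rule generation against a scratch file first
# (the real target would be /etc/ssh/sshd_config).
FILE=$(mktemp)
for x in $(seq 0 999); do
  printf 'Match User *%03d\n' "$x" >> "${FILE}"
  printf '        ChrootDirectory /var/backup/%03d/%%u\n' "$x" >> "${FILE}"
  printf '        AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n' "$x" >> "${FILE}"
  printf 'Match\n' >> "${FILE}"
done

grep -c '^Match User' "${FILE}"   # 1000 user-matching blocks
wc -l < "${FILE}"                 # 4 lines per block -> 4000 lines
rm -f "${FILE}"
```

If both counts look right, the same loop can be re-run with <code>FILE=/etc/ssh/sshd_config</code> as in the section above.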
== OpenLDAP ==
=== /etc/hosts ===
Update <code>/etc/hosts</code> with the LDAP server entry:
/etc/hosts
 
# VIP of the LDAP Server
31.216.40.4 ldapm.stoney-cloud.org
 
=== Root CA Certificate Installation ===
Install the root CA certificate into the OpenSSL default certificate storage directory:
fqdn="cloud.stoney-cloud.org" # The fully qualified domain name of the server containing the root certificate.
cd /etc/ssl/certs/
wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem
chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
 
Rebuild the CA hashes:
c_rehash /etc/ssl/certs/
 
=== /etc/openldap/ldap.conf ===
Update the <code>/etc/openldap/ldap.conf</code> LDAP configuration file:
/etc/openldap/ldap.conf
 
<pre>
# Used to specify a size limit to use when performing searches. The number should be a
# non-negative integer. A SIZELIMIT of zero (0) specifies unlimited search size.
SIZELIMIT 20000
 
# Used to specify a time limit to use when performing searches. The number should be a
# non-negative integer. A TIMELIMIT of zero (0) specifies unlimited search time to be used.
TIMELIMIT 45
 
# Specify how alias dereferencing is done. DEREF should be set to one of never, always, search,
# or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when
# searching, or dereferenced only when locating the base object for the search. The default is to
# never dereference aliases.
DEREF never
 
# Specifies the URI(s) of one or more LDAP servers to which the LDAP library should connect.
# The URI scheme may be either ldap or ldaps, which refer to LDAP over TCP and LDAP over SSL (TLS)
# respectively. Each server's name can be specified as a domain-style name or an IP address
# literal. Optionally, the server's name can be followed by a ':' and the port number the LDAP
# server is listening on. If no port number is provided, the default port for the scheme is
# used (389 for ldap://, 636 for ldaps://). A space separated list of URIs may be provided.
URI ldaps://ldapm.stoney-cloud.org
 
# Used to specify the default base DN to use when performing ldap operations. The base must be
# specified as a Distinguished Name in LDAP format.
BASE dc=stoney-cloud,dc=org
 
# This is a local copy of the certificate of the certificate authority
# used to sign the server certificate for the LDAP server I am using
TLS_CACERT /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
</pre>
 
Check your configuration by performing a search:
ldapsearch -v -H "ldaps://ldapm.stoney-cloud.org" \
-b "dc=stoney-cloud,dc=org" \
-D "cn=Manager,dc=stoney-cloud,dc=org" \
-s one "(objectClass=*)" \
-LLL -W
 
The result should look something like:
ldap_initialize( ldaps://ldapm.stoney-cloud.org:636/??base )
filter: (objectClass=*)
requesting: All userApplication attributes
dn: ou=administration,dc=stoney-cloud,dc=org
objectClass: top
objectClass: organizationalUnit
ou: administration
...
 
== Random Number Generator (haveged) ==
Tools like putty depend on random numbers (entropy) to be able to create keys and certificates.
 
=== haveged - Generate random numbers and feed linux random device ===
The haveged daemon doesn't need any special configuration, so you can start it straight from the command line:
/etc/init.d/haveged start
 
Check if the start was successful:
ps auxf | grep haveged
 
root 18001 1.0 0.0 7420 3616 ? Ss 08:48 0:00 /usr/sbin/haveged -r 0 -w 1024 -v 1
 
Add the haveged daemon to the default run level:
rc-update add haveged default
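The effect of haveged on the kernel entropy pool can be observed directly; this quick check only assumes a Linux <code>/proc</code> filesystem (note that on recent kernels the reported estimate is typically pinned at 256, since the pool no longer drains):

```shell
# The kernel's current entropy estimate, in bits. A persistently low
# value on older kernels is exactly what haveged is meant to fix.
cat /proc/sys/kernel/random/entropy_avail
```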
== nss-pam-ldapd ==
=== nslcd.conf — configuration file for LDAP nameservice daemon ===
/etc/nslcd.conf
<pre>
# This is the configuration file for the LDAP nameservice
# switch library's nslcd daemon. It configures the mapping
# between NSS names (see /etc/nsswitch.conf) and LDAP
# information in the directory.
# See the manual page nslcd.conf(5) for more information.
 
# The user and group nslcd should run as.
uid nslcd
gid nslcd
 
# The uri pointing to the LDAP server to use for name lookups.
# Multiple entries may be specified. The address that is used
# here should be resolvable without using LDAP (obviously).
#uri ldap://127.0.0.1/
#uri ldaps://127.0.0.1/
#uri ldapi://%2fvar%2frun%2fldapi_sock/
# Note: %2f encodes the '/' used as directory separator
uri ldaps://ldapm.stoney-cloud.org
 
# The LDAP version to use (defaults to 3
# if supported by client library)
#ldap_version 3
 
# The distinguished name of the search base.
base dc=stoney-cloud,dc=org
 
# The distinguished name to bind to the server with.
# Optional: default is to bind anonymously.
binddn cn=Manager,dc=stoney-cloud,dc=org
 
# The credentials to bind with.
# Optional: default is no credentials.
# Note that if you set a bindpw you should check the permissions of this file.
bindpw myverysecretpassword
 
# The distinguished name to perform password modifications by root by.
#rootpwmoddn cn=admin,dc=example,dc=com
 
# The default search scope.
#scope sub
#scope one
#scope base
 
# Customize certain database lookups.
#base group ou=Groups,dc=example,dc=com
base group ou=groups,ou=backup,ou=services,dc=stoney-cloud,dc=org
base passwd ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
base shadow ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
#scope group onelevel
#scope hosts sub
 
#filter group (&(objectClass=posixGroup)(sstIsActive=TRUE))
filter passwd (&(objectClass=posixAccount)(sstIsActive=TRUE))
filter shadow (&(objectClass=shadowAccount)(sstIsActive=TRUE))
 
# Bind/connect timelimit.
#bind_timelimit 30
 
# Search timelimit.
#timelimit 30
 
# Idle timelimit. nslcd will close connections if the
# server has not been contacted for the number of seconds.
#idle_timelimit 3600
 
# Use StartTLS without verifying the server certificate.
#ssl start_tls
tls_reqcert never
 
# CA certificates for server certificate verification
#tls_cacertdir /etc/ssl/certs
#tls_cacertfile /etc/ssl/ca.cert
 
# Seed the PRNG if /dev/urandom is not provided
#tls_randfile /var/run/egd-pool
 
# SSL cipher suite
# See man ciphers for syntax
#tls_ciphers TLSv1
 
# Client certificate and key
# Use these, if your server requires client authentication.
#tls_cert
#tls_key
 
# Mappings for Services for UNIX 3.5
#filter passwd (objectClass=User)
#map passwd uid msSFU30Name
#map passwd userPassword msSFU30Password
#map passwd homeDirectory msSFU30HomeDirectory
#map passwd homeDirectory msSFUHomeDirectory
#filter shadow (objectClass=User)
#map shadow uid msSFU30Name
#map shadow userPassword msSFU30Password
#filter group (objectClass=Group)
#map group member msSFU30PosixMember
 
# Mappings for Services for UNIX 2.0
#filter passwd (objectClass=User)
#map passwd uid msSFUName
#map passwd userPassword msSFUPassword
#map passwd homeDirectory msSFUHomeDirectory
#map passwd gecos msSFUName
#filter shadow (objectClass=User)
#map shadow uid msSFUName
#map shadow userPassword msSFUPassword
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=Group)
#map group member posixMember
 
# Mappings for Active Directory
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map passwd uid sAMAccountName
#map passwd homeDirectory unixHomeDirectory
#map passwd gecos displayName
#filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map shadow uid sAMAccountName
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=group)
 
# Alternative mappings for Active Directory
# (replace the SIDs in the objectSid mappings with the value for your domain)
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(objectClass=person)(!(objectClass=computer)))
#map passwd uid cn
#map passwd uidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd homeDirectory "/home/$cn"
#map passwd gecos displayName
#map passwd loginShell "/bin/bash"
#filter group (|(objectClass=group)(objectClass=person))
#map group gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
 
# Mappings for AIX SecureWay
#filter passwd (objectClass=aixAccount)
#map passwd uid userName
#map passwd userPassword passwordChar
#map passwd uidNumber uid
#map passwd gidNumber gid
#filter group (objectClass=aixAccessGroup)
#map group cn groupName
#map group gidNumber gid
</pre>
 
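Once nslcd is running and the NSS configuration from the following sections is in place, account lookups can be spot-checked with <code>getent</code>. A local account resolves even without LDAP, so this doubles as a sanity check of the NSS chain itself:

```shell
# NSS lookups follow /etc/nsswitch.conf ("passwd: files ldap"):
# local files are consulted first, then the LDAP directory via nslcd.
getent passwd root | cut -d: -f1   # a local account always resolves

# Once nslcd is up, an LDAP-backed backup account resolves the same way:
# getent passwd 4000187
```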
=== nsswitch.conf - Name Service Switch configuration file ===
/etc/nsswitch.conf
<pre>
passwd:      files ldap
shadow:      files ldap
group:       files ldap

# passwd:    db files nis
# shadow:    db files nis
# group:     db files nis

hosts:       files dns
networks:    files dns

services:    db files
protocols:   db files
rpc:         db files
ethers:      db files
netmasks:    files
netgroup:    files
bootparams:  files
automount:   files
aliases:     files
</pre>

=== system-auth ===
 vi /etc/pam.d/system-auth
<pre>
auth       required     pam_env.so
auth       sufficient   pam_unix.so try_first_pass likeauth nullok
auth       sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
auth       required     pam_deny.so

account    required     pam_unix.so
account    sufficient   pam_ldap.so minimum_uid=1000 use_first_pass

password   required     pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password   required     pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password   sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
password   required     pam_deny.so

session    required     pam_limits.so
session    required     pam_env.so
session    required     pam_unix.so
session    sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
</pre>

=== Test the Setup ===
 nslcd -d

=== Update the Default Run Levels ===
 rc-update add nslcd default
 rc-update add nscd default

=== Start the necessary Daemons ===
 /etc/init.d/nslcd start
 /etc/init.d/nscd start

== Quota ==
=== 32-bit Project Identifier Support ===
We need to enable 32-bit project identifier support (the PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536), which is already the default on the stepping stone virtual machines:
 mkfs.xfs -i projid32bit=1 /dev/vg-local-01/var

=== Update /etc/fstab and Mount ===
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab.
For example:
 LABEL=LV-VAR  /var  xfs  noatime,discard,inode64,uquota,pquota  0 2

 reboot

Check if everything went OK:
 df -h | grep var
 
 /dev/mapper/vg--local--01-var  1023G  220G  804G  22% /var

=== Verify ===
Some important options for xfs_quota:
* -x: Enable expert mode.
* -c: Pass arguments on the command line. Multiple arguments may be given.

Remount the file system /var and check if /var has the desired values:
 xfs_quota -x -c state /var

As you can see (items marked bold), we have achieved our goal:
 User quota state on /var (/dev/mapper/vg--local--01-var)
   Accounting: '''ON'''
   Enforcement: '''ON'''
   Inode: #131 (1 blocks, 1 extents)
 Group quota state on /var (/dev/mapper/vg--local--01-var)
   Accounting: OFF
   Enforcement: OFF
   Inode: #132 (1 blocks, 1 extents)
 Project quota state on /var (/dev/mapper/vg--local--01-var)
   Accounting: '''ON'''
   Enforcement: '''ON'''
   Inode: #132 (1 blocks, 1 extents)
 Blocks grace time: [7 days 00:00:30]
 Inodes grace time: [7 days 00:00:30]
 Realtime Blocks grace time: [7 days 00:00:30]

=== User Quotas ===
==== Adding a User Quota ====
Set a quota of 1 gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):
 xfs_quota -x -c 'limit bhard=1048576k 4000187' /var

Or in bytes:
 xfs_quota -x -c 'limit bhard=1073741824 4000187' /var

Read the quota information for the user 4000187:
 xfs_quota -x -c 'quota -v -N -u 4000187' /var
 
 /dev/mapper/vg--local--01-var  0  0  1048576  00 [--------]  /var

If the user has data in the project that belongs to him, the result will change:
 /dev/mapper/vg--local--01-var  512000  0  1048576  00 [--------]  /var

==== Modifying a User Quota ====
To modify a user's quota, you just set a new quota (limit):
 xfs_quota -x -c 'limit bhard=1048576k 4000187' /var

Read the quota information for the user 4000187:
 xfs_quota -x -c 'quota -v -N -u 4000187' /var
 
 /dev/mapper/vg--local--01-var  0  0  1048576  00 [--------]  /var

If the user has data in the project that belongs to him,
the result will change:
 /dev/mapper/vg--local--01-var  512000  0  1048576  00 [--------]  /var

==== Removing a User Quota ====
Removing a quota for a user:
 xfs_quota -x -c 'limit bhard=0 4000187' /var

The following command should give you an empty result:
 xfs_quota -x -c 'quota -v -N -u 4000187' /var

=== Project (Directory) Quotas ===
==== Adding a Project (Directory) Quota ====
The XFS file system additionally allows you to set quotas on individual directory hierarchies in the file system, known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name. We'll use the following values in the examples:
* project_ID: The uid of the online backup account (4000187).
* project_name: The uid of the online backup account (4000187). This could be a human readable name.
* mountpoint: The mount point of the XFS file system (/var). See the <code>/etc/fstab</code> entry from above.
* directory: The directory of the project (187/4000187), starting from the mount point of the XFS file system (/var).

Define a unique project ID for the directory hierarchy in the <code>/etc/projects</code> file (project_ID:mountpoint/directory):
 echo "4000187:/var/backup/187/4000187/home/4000187" >> /etc/projects

Create an entry in the <code>/etc/projid</code> file that maps a project name to the project ID (project_name:project_ID):
 echo "4000187:4000187" >> /etc/projid

Set up the project:
 xfs_quota -x -c 'project -s -p /var/backup/187/4000187/home/4000187 4000187' /var

Set a quota (limit) on the project:
 xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var

Check your quota (limit):
 xfs_quota -x -c 'quota -p 4000187' /var

Check the quota:
* <code>-v</code>: increase verbosity in reporting (also dumps zero values).
* <code>-N</code>: suppress the initial header.
* <code>-p</code>: display project quota information.
* <code>-h</code>: human readable format.
 xfs_quota -x -c 'quota -v -N -p 4000187' /var
 
 /dev/mapper/vg--local--01-var  0  0  1048576  00 [--------]  /var

If you copied data into the project, the output will look something like:
 /dev/mapper/vg--local--01-var  512000  0  1048576  00 [--------]  /var

To give you an overall view of the whole system:
 xfs_quota -x -c report /var
<pre>
User quota on /var (/dev/mapper/vg--local--01-var)
                        Blocks
User ID      Used   Soft    Hard  Warn/Grace
---------- --------------------------------------------------
root      1024000      0       0  00 [--------]
4000187         0      0 1048576  00 [--------]

Project quota on /var (/dev/mapper/vg--local--01-var)
                        Blocks
Project ID   Used   Soft    Hard  Warn/Grace
---------- --------------------------------------------------
4000187    512000      0 1048576  00 [--------]
</pre>

==== Modifying a Project (Directory) Quota ====
To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:
 xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var

Check your quota (limit):
 xfs_quota -x -c 'quota -p 4000187' /var

==== Removing a Project (Directory) Quota ====
Removing a quota from a project:
 xfs_quota -x -c 'limit -p bhard=0 4000187' /var

Check the results:
 xfs_quota -x -c report /var
<pre>
User quota on /var (/dev/mapper/vg--local--01-var)
                        Blocks
User ID      Used   Soft    Hard  Warn/Grace
---------- --------------------------------------------------
root       512000      0       0  00 [--------]
4000187         0      0    1024  00 [--------]
</pre>
As you can see, the line with the Project ID 4000187 has disappeared:
 4000187    512000      0 1048576  00 [--------]

Don't forget to remove the project from <code>/etc/projects</code> and <code>/etc/projid</code>:
 sed -i -e '/4000187/d' /etc/projects
 sed -i -e '/4000187/d' /etc/projid

=== Some important notes concerning XFS ===
# The '''quotacheck''' command has no effect on XFS filesystems.
The first time quota accounting is turned on (at mount time), XFS does an automatic quotacheck internally; afterwards, the quota system will always be completely consistent until quotas are manually turned off.
# There is '''no need for quota file(s)''' in the root of the XFS filesystem.

== prov-backup-rsnapshot ==
Install the [[stoney_backup:_prov-backup-rsnapshot | prov-backup-rsnapshot]] daemon script using the package manager:
<pre>
emerge -va sys-apps/sst-prov-backup-rsnapshot
</pre>

=== Configuration ===
If this is the first provisioning module running on this server (very likely), you first have to configure the provisioning daemon. You can skip this step if another provisioning module is already running on this server.

==== Provisioning global configuration ====
The global configuration for the provisioning daemon (which was installed with the first provisioning module and the <code>sys-apps/sst-provisioning</code> package) applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules.
/etc/Provisioning/Global.conf
<pre>
# Copyright (C) 2012 stepping stone GmbH
# Switzerland
# http://www.stepping-stone.ch
# support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

[Global]
# If true the script logs all debug information to the log-file.
LOG_DEBUG = 0

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

# The number of seconds to wait before retrying to contact the backend server during startup.
SLEEP = 10

# Number of backend server connection retries during startup.
ATTEMPTS = 3

[Operation Mode]
# The number of seconds to wait before retrying to contact the backend server in case of a service interruption.
SLEEP = 30

# Number of backend server connection retries in case of a service interruption.
ATTEMPTS = 3

[Mail]
# Error messages are sent to the mail address configured below.
SENDTO = <YOUR-MAIL-ADDRESS>
HOST = mail.stepping-stone.ch
PORT = 587
USERNAME = <YOUR-NOTIFICATION-EMAIL-ADDRESS>
PASSWORD = <PASSWORD>
FROMNAME = Provisioning daemon
CA_DIR = /etc/ssl/certs
SSL = starttls
AUTH_METHOD = LOGIN

# Additionally, you can be informed about creation, modification and deletion of services.
WANTINFOMAIL = 1
</pre>

==== Provisioning daemon prov-backup-rsnapshot module ====
The module specific configuration is located in /etc/Provisioning/<Service>/<Type>.conf.
In the case of the prov-backup-rsnapshot module this is <code>/etc/Provisioning/Backup/Rsnapshot.conf</code>. (Note: Comments starting with /* are not in the configuration file; they are only in the wiki to add some additional information.)
<pre>
# Copyright (C) 2013 stepping stone GmbH
# Switzerland
# http://www.stepping-stone.ch
# support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

/* If you want, you can override the log settings from the global configuration file; this might be useful for debugging */
[Global]
# If true the script logs all debug information to the log-file.
LOG_DEBUG = 1

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

/* Specify the host's fully qualified domain name. This name will be used to perform some checks and will also appear in the information and error mails */
ENVIRONMENT = <FQDN>

[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.stoney-cloud.org
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /etc/Provisioning/Backup/rsnapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(sstProvisioningState=0))

/* Specifies the service itself. As it is the prov-backup-rsnapshot module, the SERVICE is "Backup" and the TYPE is "Rsnapshot".
 * The MODUS is as usual selfcare and the TRANSPORTAPI is LocalCLI. This is because the daemon is running on the same host as the
 * backup accounts are provisioned on, and the commands can be executed on this host using the CLI.
 * For more information about MODUS and TRANSPORTAPI see https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration */
[Service]
MODUS = selfcare
TRANSPORTAPI = LocalCLI
SERVICE = Backup
TYPE = Rsnapshot
SYSLOG = prov-backup-rsnapshot

/* For the TRANSPORTAPI LocalCLI no gateway is required because there is no connection to establish. So set HOST, USER and
 * DSA_FILE to whatever you want. Don't leave them blank, otherwise the provisioning daemon would log error messages saying
 * these attributes are empty */
[Gateway]
HOST = localhost
USER = provisioning
DSA_FILE = none

/* Information about the backup itself (how to set up everything). Note that the %uid% in the RSNAPSHOT_CONFIG_FILE parameter will
 * be replaced by the account's UID. The script CREATE_CHROOT_CMD was installed with the prov-backup-rsnapshot module, so do not
 * change this parameter. The quota parameters (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE and PROJID_FILE) represent
 * the quota setup as described on http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed this
 * manual, you can copy-paste them into your configuration file, otherwise adapt them according to your quota setup. */
[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/xfs_quota
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh
MOUNTPOINT = /var
QUOTA_FILE = /etc/backupSize
PROJECTS_FILE = /etc/projects
PROJID_FILE = /etc/projid
</pre>

== backup utils ==
Install the backup utils (multiple scripts which help you manage and monitor your backup server and backup accounts) using the package manager.
For more information about the scripts, please see the [[stoney_backup:_Service_Software | stoney backup Service Software]] page.
<pre>
emerge -va sys-apps/sst-backup-utils
</pre>

=== Configuration ===
Please refer to the configuration sections for the different scripts in [[stoney_backup:_Service_Software | stoney backup Service Software]].
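Two path conventions used throughout this setup can be illustrated with a short, self-contained sketch. Both the textual expansion of the <code>%uid%</code> placeholder and the derivation of the three-digit bucket directory are assumptions inferred from the examples above (uid 4000187 living under /var/backup/187/4000187), not taken from the daemon's code:

```shell
uid=4000187

# Assumption: %uid% in RSNAPSHOT_CONFIG_FILE is expanded by plain
# textual substitution with the account uid.
echo "/etc/rsnapshot/rsnapshot.conf.%uid%" | sed "s/%uid%/${uid}/"
# -> /etc/rsnapshot/rsnapshot.conf.4000187

# Assumption: the bucket in /var/backup/<bucket>/<uid> is the last three
# digits of the uid, matching the sshd "Match User *NNN" rules.
bucket=$(printf '%03d' $((uid % 1000)))
echo "/var/backup/${bucket}/${uid}"
# -> /var/backup/187/4000187
```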
= Links =
* [http://www.openldap.org/ OpenLDAP], an open source implementation of the Lightweight Directory Access Protocol.
* [http://arthurdejong.org/nss-pam-ldapd/ nss-pam-ldapd], a Name Service Switch (NSS) module that allows your LDAP server to provide user account, group, host name, alias, netgroup, and basically any other information that you would normally get from /etc flat files or NIS.
* [http://www.gentoo.org/doc/de/ldap-howto.xml Gentoo Leitfaden zur OpenLDAP Authentifikation].
* [http://wiki.gentoo.org/wiki/Centralized_authentication_using_OpenLDAP Centralized authentication using OpenLDAP].
* [https://code.google.com/p/openssh-lpk/source/browse/trunk/schemas/openssh-lpk_openldap.schema openssh-lpk_openldap.schema] OpenSSH LDAP Public Keys.
* [http://sourceforge.net/projects/linuxquota/ linuxquota] Linux DiskQuota.
* [http://www.rsnapshot.org/ rsnapshot], a remote filesystem snapshot utility, based on rsync.
* [http://olivier.sessink.nl/jailkit/ Jailkit], set of utilities to limit user accounts to specific files using chroot() and or specific commands. Also includes a tool to build a chroot environment.
* [http://www.busybox.net/ Busybox] BusyBox combines tiny versions of many common UNIX utilities into a single small executable. Useful to reduce the number of files (and thus the complexity) when building a chroot.
[[Category:Services]]
[[Category:stoney backup]]