stoney backup: Server set-up
Contents
- 1 Abstract
- 2 Overview
- 3 Software Installation
- 4 Software Configuration
- 4.1 Quota
- 4.2 OpenSSH
- 4.3 OpenLDAP
- 4.4 nss-pam-ldapd
- 4.5 system-auth
- 4.6 sshd keys
- 4.7 sshd_config
- 4.8 rsnapshot
- 4.9 prov-backup-rsnapshot
- 4.10 schedule warning
- 4.11 writeAccountSize
- 5 Links
Abstract
This document describes server setup for the stoney cloud (Online) Backup service, built upon the Gentoo Linux distribution.
Overview
After working through this documentation, you will be able to set up and configure your own (Online) Backup service server.
Software Installation
Requirements
A working stoney cloud installation.
Keywords & USE-Flags
For a minimal OpenLDAP directory installation (just the necessary tools, this is what we would normally choose for a productive environment):
echo "net-nds/openldap ~amd64" >> /etc/portage/package.keywords
For an optional full OpenLDAP directory installation:
echo "net-nds/openldap overlays perl sasl" >> /etc/portage/package.use
NSS and PAM modules for lookups using LDAP:
echo "sys-auth/nss-pam-ldapd sasl" >> /etc/portage/package.use echo "sys-auth/nss-pam-ldapd ~amd64" >> /etc/portage/package.keywords echo "sys-fs/quota ldap" >> /etc/portage/package.use
For the prov-backup-rsnapshot daemon:
echo "dev-perl/Net-SMTPS ~amd64" >> /etc/portage/package.keywords echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords
To build puttygen only without X11 (ebuild currently from our overlay, see Gentoo Bug #482816):
echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords echo "net-misc/putty -gtk" >> /etc/portage/package.use
Emerge
emerge -va nss-pam-ldapd
emerge -va rsnapshot
emerge -va quota
emerge -va dev-perl/Config-IniFiles
emerge -va dev-perl/LockFile-Simple
emerge -va dev-perl/Net-SMTPS
emerge -va dev-perl/perl-ldap
emerge -va virtual/perl-Switch
emerge -va dev-perl/Parallel-ForkManager
emerge -va dev-perl/XML-Simple
emerge -va dev-perl/Date-Calc
emerge -va dev-perl/Date-Manip
emerge -va dev-perl/DateTime-Format-Strptime
emerge -va dev-perl/Text-Template
emerge -va perl-core/Switch
emerge -va net-misc/putty
emerge -va sys-apps/haveged
CPAN
Install the Time::Stopwatch, XML::Validator::Schema and Schedule::Cron::Events libraries from CPAN (no ebuilds are available):
cpan
cpan[1]> install Time::Stopwatch
LWP not available
Fetching with Net::FTP: ftp://tux.rainside.sk/CPAN/authors/01mailrc.txt.gz
Going to read '/root/.cpan/sources/authors/01mailrc.txt.gz'
...
ILTZU/Time-Stopwatch-1.00.tar.gz
  /usr/bin/make install  -- OK
cpan[2]> install XML::Validator::Schema
Going to read '/root/.cpan/Metadata'
...
SAMTREGAR/XML-Validator-Schema-1.10.tar.gz
  /usr/bin/make install  -- OK
cpan[3]> install Schedule::Cron::Events
Going to read '/root/.cpan/Metadata'
...
  /usr/bin/make install  -- OK
cpan[4]> exit
Terminal does not support GetHistory.
Lockfile removed.
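If you prefer to skip the interactive shell, the cpan client can also install all three modules in one non-interactive call (same modules as above):
cpan -i Time::Stopwatch XML::Validator::Schema Schedule::Cron::Events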
Software Configuration
Quota
32-bit Project Identifier Support
We need to enable 32-bit project identifier support (PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536):
mkfs.xfs -i projid32bit=1 /dev/vdb1
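mkfs.xfs prints the resulting geometry; once the file system is mounted (see below), you can re-check the flag at any time. With a reasonably recent xfsprogs, the projid32bit setting appears in the metadata section of the geometry output:
xfs_info /var/backup | grep projid32bit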
Mount
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:
/dev/vdb1 /var/backup xfs noatime,uquota,pquota 0 0
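Note that XFS quota accounting cannot be switched on with a simple mount -o remount; the file system has to be unmounted and mounted again for the uquota/pquota options to take effect:
umount /var/backup
mount /var/backup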
Verify
Some important options for xfs_quota:
- -x: Enable expert mode.
- -c: Pass arguments on the command line. Multiple arguments may be given.
After mounting the file system again as described above, check whether /var/backup has the desired values:
xfs_quota -x -c state /var/backup
As you can see, accounting and enforcement are ON for both user and project quotas, so we have achieved our goal:
User quota state on /var/backup (/dev/vdb1)
  Accounting: ON
  Enforcement: ON
  Inode: #131 (3 blocks, 2 extents)
Group quota state on /var/backup (/dev/vdb1)
  Accounting: OFF
  Enforcement: OFF
  Inode: #809717 (1 blocks, 1 extents)
Project quota state on /var/backup (/dev/vdb1)
  Accounting: ON
  Enforcement: ON
  Inode: #809717 (1 blocks, 1 extents)
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
Realtime Blocks grace time: [7 days 00:00:30]
User Quotas
Adding a User Quota
Set a quota of 1 gigabyte for the user 4000187 (the values are in kilobytes: 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var/backup
Or in bytes:
xfs_quota -x -c 'limit bhard=1073741824 4000187' /var/backup
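xfs_quota can also combine a soft limit (subject to the grace times shown earlier) with the hard limit; for example (hypothetical values), warn at 900 megabytes while still enforcing the 1 gigabyte hard limit:
xfs_quota -x -c 'limit bsoft=921600k bhard=1048576k 4000187' /var/backup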
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
/dev/vdb1 0 0 1048576 00 [--------] /var/backup
If the user has data in the project that belongs to him, the result will change:
/dev/vdb1 512000 0 1048576 00 [--------] /var/backup
Modifying a User Quota
To modify a user's quota, you simply set a new quota (limit):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var/backup
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
/dev/vdb1 0 0 1048576 00 [--------] /var/backup
If the user has data in the project that belongs to him, the result will change:
/dev/vdb1 512000 0 1048576 00 [--------] /var/backup
Removing a User Quota
Removing a quota for a user:
xfs_quota -x -c 'limit bhard=0 4000187' /var/backup
The following command should give you an empty result:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
Project (Directory) Quotas
Adding a Project (Directory) Quota
The XFS file system additionally allows you to set quotas on individual directory hierarchies in the file system that are known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name. We'll use the following values in the examples:
- project_ID: The uid of the online backup account (4000187).
- project_name: The uid of the online backup account (4000187). This could be a human readable name.
- mountpoint: The mount point of the XFS file system (/var/backup). See the /etc/fstab entry from above.
- directory: The directory of the project (187/4000187), starting from the mount point of the XFS file system (/var/backup).
Define a unique project ID for the directory hierarchy in the /etc/projects file (project_ID:mountpoint/directory):
echo "4000187:/var/backup/187/4000187/home/4000187" >> /etc/projects
Create an entry in the /etc/projid file that maps a project name to the project ID (project_name:project_ID):
echo "4000187:4000187" >> /etc/projid
Set Project:
xfs_quota -x -c 'project -s -p /var/backup/187/4000187/home/4000187 4000187' /var/backup
Set Quota (limit) on Project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var/backup
Check your Quota (limit)
xfs_quota -x -c 'quota -p 4000187' /var/backup
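Since the same four steps recur for every new account, they lend themselves to a small helper. The following sketch is hypothetical (not part of the stoney backup tools) and assumes the uid-based directory layout used throughout this document, where the last three digits of the uid select the shard directory:

#!/bin/bash
# add-project-quota.sh - hypothetical helper
# Usage: add-project-quota.sh <uid> <bhard, e.g. 1048576k>
ACCOUNT=$1
LIMIT=$2
MOUNTPOINT=/var/backup
SHARD=${ACCOUNT: -3}                        # e.g. 4000187 -> 187
DIR=${MOUNTPOINT}/${SHARD}/${ACCOUNT}/home/${ACCOUNT}

echo "${ACCOUNT}:${DIR}" >> /etc/projects   # project_ID:mountpoint/directory
echo "${ACCOUNT}:${ACCOUNT}" >> /etc/projid # project_name:project_ID
xfs_quota -x -c "project -s -p ${DIR} ${ACCOUNT}" ${MOUNTPOINT}
xfs_quota -x -c "limit -p bhard=${LIMIT} ${ACCOUNT}" ${MOUNTPOINT}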
Check the Quota:
- -v: increase verbosity in reporting (also dumps zero values).
- -N: suppress the initial header.
- -p: display project quota information.
- -h: human readable format.
xfs_quota -x -c 'quota -v -N -p 4000187' /var/backup
/dev/vdb1 0 0 1048576 00 [--------] /var/backup
If you copied data into the project, the output will look something like:
/dev/vdb1 512000 0 1048576 00 [--------] /var/backup
To give you an overall view of the whole system:
xfs_quota -x -c report /var/backup
User quota on /var/backup (/dev/vdb1)
                         Blocks
User ID      Used      Soft      Hard    Warn/Grace
---------- --------------------------------------------------
root      1024000         0         0    00 [--------]
4000187         0         0   1048576    00 [--------]

Project quota on /var/backup (/dev/vdb1)
                         Blocks
Project ID   Used      Soft      Hard    Warn/Grace
---------- --------------------------------------------------
4000187    512000         0   1048576    00 [--------]
Modifying a Project (Directory) Quota
To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var/backup
Check your quota (limit)
xfs_quota -x -c 'quota -p 4000187' /var/backup
Removing a Project (Directory) Quota
Removing a quota from a project:
xfs_quota -x -c 'limit -p bhard=0 4000187' /var/backup
Check the results:
xfs_quota -x -c report /var/backup
User quota on /var/backup (/dev/vdb1)
                         Blocks
User ID      Used      Soft      Hard    Warn/Grace
---------- --------------------------------------------------
root       512000         0         0    00 [--------]
4000187         0         0      1024    00 [--------]
As you can see, the line with the Project ID 4000187 has disappeared:
4000187 512000 0 1048576 00 [--------]
Don't forget to remove the project from /etc/projects and /etc/projid:
sed -i -e '/4000187/d' /etc/projects
sed -i -e '/4000187/d' /etc/projid
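Note that this pattern deletes every line that merely contains 4000187; if other project entries could embed that number, a variant anchored to the start of the line is safer:
sed -i -e '/^4000187:/d' /etc/projects
sed -i -e '/^4000187:/d' /etc/projid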
Some important notes concerning XFS
- The quotacheck command has no effect on XFS filesystems. The first time quota accounting is turned on (at mount time), XFS does an automatic quotacheck internally; afterwards, the quota system will always be completely consistent until quotas are manually turned off.
- There is no need for quota file(s) in the root of the XFS filesystem.
OpenSSH
Configure the OpenSSH daemon:
vi /etc/ssh/sshd_config
Set the following options:
PubkeyAuthentication yes
PasswordAuthentication yes
UsePAM yes
Subsystem sftp internal-sftp
Make sure that Subsystem sftp internal-sftp is the last of the global options in the configuration file: everything after a Match line only applies to matching connections, so the Match blocks appended below must come after all global settings.
We want to reduce the number of chroot environments per folder. As the ChrootDirectory configuration option only supports the tokens %h (home directory of the user) and %u (username of the user), we need to create the necessary matching rules in the form of:
Match User *000
 ChrootDirectory /var/backup/000/%u
 AuthorizedKeysFile /var/backup/000/%u/%h/.ssh/authorized_keys
Match

Match User *001
 ChrootDirectory /var/backup/001/%u
 AuthorizedKeysFile /var/backup/001/%u/%h/.ssh/authorized_keys
Match

...

Match User *999
 ChrootDirectory /var/backup/999/%u
 AuthorizedKeysFile /var/backup/999/%u/%h/.ssh/authorized_keys
Match
The creation of the matching rules is done by executing the following bash commands:
FILE=/etc/ssh/sshd_config; for x in {0..999} ; do \
printf "Match User *%03d\n" $x >> ${FILE}; \
printf " ChrootDirectory /var/backup/%03d/%%u\n" $x >> ${FILE}; \
printf " AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" $x >> ${FILE}; \
printf "Match\n" >> ${FILE}; \
done
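Before restarting, it is worth validating the generated file; sshd's test mode only checks the configuration and reports syntax errors without touching the running daemon:
sshd -t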
Don't forget to restart the OpenSSH daemon:
/etc/init.d/sshd restart
OpenLDAP
/etc/openldap/ldap.conf
nss-pam-ldapd
/etc/nslcd.conf
# This is the configuration file for the LDAP nameservice
# switch library's nslcd daemon. It configures the mapping
# between NSS names (see /etc/nsswitch.conf) and LDAP
# information in the directory.
# See the manual page nslcd.conf(5) for more information.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The uri pointing to the LDAP server to use for name lookups.
# Multiple entries may be specified. The address that is used
# here should be resolvable without using LDAP (obviously).
#uri ldap://127.0.0.1/
#uri ldaps://127.0.0.1/
#uri ldapi://%2fvar%2frun%2fldapi_sock/
# Note: %2f encodes the '/' used as directory separator
uri ldaps://ldapm.tombstone.ch

# The LDAP version to use (defaults to 3
# if supported by client library)
#ldap_version 3

# The distinguished name of the search base.
base dc=stoney-cloud,dc=org

# The distinguished name to bind to the server with.
# Optional: default is to bind anonymously.
binddn cn=Manager,dc=stoney-cloud,dc=org

# The credentials to bind with.
# Optional: default is no credentials.
# Note that if you set a bindpw you should check the permissions of this file.
bindpw myverysecretpassword

# The distinguished name to perform password modifications by root by.
#rootpwmoddn cn=admin,dc=example,dc=com

# The default search scope.
#scope sub
#scope one
#scope base

# Customize certain database lookups.
#base group ou=Groups,dc=example,dc=com
base group ou=groups,ou=backup,ou=services,dc=stoney-cloud,dc=org
base passwd ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
base shadow ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
#scope group onelevel
#scope hosts sub
#filter group (&(objectClass=posixGroup)(sstIsActive=TRUE))
filter passwd (&(objectClass=posixAccount)(sstIsActive=TRUE))
filter shadow (&(objectClass=shadowAccount)(sstIsActive=TRUE))

# Bind/connect timelimit.
#bind_timelimit 30

# Search timelimit.
#timelimit 30

# Idle timelimit. nslcd will close connections if the
# server has not been contacted for the number of seconds.
#idle_timelimit 3600

# Use StartTLS without verifying the server certificate.
#ssl start_tls
tls_reqcert never

# CA certificates for server certificate verification
#tls_cacertdir /etc/ssl/certs
#tls_cacertfile /etc/ssl/ca.cert

# Seed the PRNG if /dev/urandom is not provided
#tls_randfile /var/run/egd-pool

# SSL cipher suite
# See man ciphers for syntax
#tls_ciphers TLSv1

# Client certificate and key
# Use these, if your server requires client authentication.
#tls_cert
#tls_key

# Mappings for Services for UNIX 3.5
#filter passwd (objectClass=User)
#map passwd uid msSFU30Name
#map passwd userPassword msSFU30Password
#map passwd homeDirectory msSFU30HomeDirectory
#map passwd homeDirectory msSFUHomeDirectory
#filter shadow (objectClass=User)
#map shadow uid msSFU30Name
#map shadow userPassword msSFU30Password
#filter group (objectClass=Group)
#map group member msSFU30PosixMember

# Mappings for Services for UNIX 2.0
#filter passwd (objectClass=User)
#map passwd uid msSFUName
#map passwd userPassword msSFUPassword
#map passwd homeDirectory msSFUHomeDirectory
#map passwd gecos msSFUName
#filter shadow (objectClass=User)
#map shadow uid msSFUName
#map shadow userPassword msSFUPassword
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=Group)
#map group member posixMember

# Mappings for Active Directory
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map passwd uid sAMAccountName
#map passwd homeDirectory unixHomeDirectory
#map passwd gecos displayName
#filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map shadow uid sAMAccountName
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=group)

# Alternative mappings for Active Directory
# (replace the SIDs in the objectSid mappings with the value for your domain)
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(objectClass=person)(!(objectClass=computer)))
#map passwd uid cn
#map passwd uidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd homeDirectory "/home/$cn"
#map passwd gecos displayName
#map passwd loginShell "/bin/bash"
#filter group (|(objectClass=group)(objectClass=person))
#map group gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820

# Mappings for AIX SecureWay
#filter passwd (objectClass=aixAccount)
#map passwd uid userName
#map passwd userPassword passwordChar
#map passwd uidNumber uid
#map passwd gidNumber gid
#filter group (objectClass=aixAccessGroup)
#map group cn groupName
#map group gidNumber gid
/etc/nsswitch.conf
passwd: files ldap
shadow: files ldap
group: files ldap

# passwd: db files nis
# shadow: db files nis
# group: db files nis

hosts: files dns
networks: files dns

services: db files
protocols: db files
rpc: db files
ethers: db files
netmasks: files
netgroup: files
bootparams: files

automount: files
aliases: files
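With nslcd configured and wired into nsswitch.conf, start the daemon and add it to the default run level. A getent lookup against an account that exists in your directory (here the example uid 4000187 from above) quickly shows whether LDAP resolution works:
/etc/init.d/nslcd start
rc-update add nslcd default
getent passwd 4000187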
system-auth
vi /etc/pam.d/system-auth
auth required pam_env.so
auth sufficient pam_unix.so try_first_pass likeauth nullok
auth sufficient pam_ldap.so minimum_uid=1000 use_first_pass
auth required pam_deny.so

account sufficient pam_ldap.so minimum_uid=1000
account required pam_unix.so
account sufficient pam_ldap.so minimum_uid=1000

password sufficient pam_unix.so try_first_pass nullok sha512 shadow
password sufficient pam_ldap.so minimum_uid=1000 try_first_pass
password required pam_deny.so

session required pam_limits.so
session required pam_env.so
session required pam_unix.so
session optional pam_ldap.so minimum_uid=1000
sshd keys
If you migrate from an existing backup server, you might want to copy the SSH host keys to the new server. If you do so, clients won't see a difference between the two hosts, as the fingerprint remains the same. Copy the following files from the existing host to the new one:
- /etc/ssh/ssh_host_dsa_key
- /etc/ssh/ssh_host_ecdsa_key
- /etc/ssh/ssh_host_key
- /etc/ssh/ssh_host_rsa_key
- /etc/ssh/ssh_host_dsa_key.pub
- /etc/ssh/ssh_host_ecdsa_key.pub
- /etc/ssh/ssh_host_key.pub
- /etc/ssh/ssh_host_rsa_key.pub
Set the correct permissions on the new host:
chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_key /etc/ssh/ssh_host_rsa_key
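To confirm that the keys were copied intact, compare the fingerprints on the old and the new host; they must match (shown here for the RSA key):
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub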
And restart the SSH daemon. Caution: do not close your existing SSH session until you are sure the daemon restarted properly and you can log in again.
/etc/init.d/sshd restart
sshd_config
vi /etc/ssh/sshd_config
Changes:
PasswordAuthentication yes
UsePAM yes
#AllowUsers
rsnapshot
Install the source
cd /var/work
git clone --recursive https://github.com/stepping-stone/backup-util.git
cd backup-util/bin
ln -s ../perl-utils/lib/PerlUtil/ PerlUtil
Configuration
vi /var/work/backup-util/etc/snapshot.conf
[General]
MaxParallelProcesses = 5
Rsnapshot_command = /usr/bin/nice -n 19 /usr/bin/rsnapshot -c /etc/rsnapshot/rsnapshot.conf.%uid% %interval%

[LDAP]
Host = ldaps://ldapm.tombstone.ch
Port = 636
User = cn=Manager,dc=foss-cloud,dc=org
Password = <Password>
CA_Path = /etc/ssl/certs
Accounts_Base = ou=accounts,ou=backup,ou=services,dc=foss-cloud,dc=org
Cronjobs
crontab -e
...
# Rsnapshot for all users
30 22 * * *   /var/work/backup-util/bin/snapshot.pl --interval daily
15 22 * * sun /var/work/backup-util/bin/snapshot.pl --interval weekly
00 22 1 * *   /var/work/backup-util/bin/snapshot.pl --interval monthly
...
prov-backup-rsnapshot
cd /var/work
git clone --recursive https://github.com/stepping-stone/prov-backup-rsnapshot.git
cd /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/
ln -s ../../../etc/Provisioning/Backup/ Backup
cd /var/work/prov-backup-rsnapshot/Provisioning/lib/Provisioning/
ln -s ../../../lib/Provisioning/Backup/ Backup
chmod -R a+rX /var/work
Configuration
The configuration file is currently located in the /var/work/prov-backup-rsnapshot directory:
vi /var/work/prov-backup-rsnapshot/etc/Provisioning/Backup/Rsnapshot_test.conf
# Copyright (C) 2013 stepping stone GmbH
#                    Switzerland
#                    http://www.stepping-stone.ch
#                    support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 1
# If true the script logs additional information to the log-file.
LOG_INFO = 1
# If true the script logs warnings to the log-file.
LOG_WARNING = 1
# If true the script logs errors to the log-file.
LOG_ERR = 1
ENVIRONMENT =

[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.tombstone.ch
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/Backup/rnsapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(objectClass=*))

[Service]
MODUS = selfcare
TRANSPORTAPI = LocalCLI
SERVICE = Backup
TYPE = Rsnapshot
SYSLOG = Backup-Rsnapshot

[Gateway]
HOST = localhost
USER = provisioning
DSA_FILE = none

[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/setquota
CREATE_CHROOT_CMD = /root/createDummyBackupDirectory.sh # You might want to change this for the productive system
MOUNTPOINT = / # You might want to change this for the productive system
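Before wiring up the init script, you can sanity-check this configuration by running the daemon manually; the -c and -g flags match the ones used by the init script below (Global.conf is assumed to ship with the checkout):
perl /var/work/prov-backup-rsnapshot/Provisioning/bin/provisioning.pl \
  -c /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/Backup/Rsnapshot_test.conf \
  -g /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/Global.conf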
Init Scripts
Currently we just create very basic init scripts which start and stop the daemon:
/etc/init.d/prov-backup-rsnapshot
#!/sbin/runscript
# Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $

depend() {
    need net
    after slapd
}

start() {
    ebegin "Starting backup-rsnapshot provisioning daemon"
    start-stop-daemon --start \
        --background \
        --user ${USER:-root}:${GROUP:-root} \
        --make-pidfile \
        --pidfile "${PIDFILE}" \
        --exec /var/work/prov-backup-rsnapshot/Provisioning/bin/provisioning.pl \
        --interpreted \
        -- ${OPTIONS} \
        -c /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/Backup/Rsnapshot_test.conf \
        -g /var/work/prov-backup-rsnapshot/Provisioning/etc/Provisioning/Global.conf
    eend $?
}

stop() {
    ebegin "Stopping backup-rsnapshot provisioning daemon"
    start-stop-daemon --stop \
        --pidfile "${PIDFILE}"
    eend $?
}
/etc/conf.d/prov-backup-rsnapshot
USER="root" GROUP="root" PIDFILE="/run/prov-backup-rsnapshot.pid" # OPTIONS="..."
Run-Level
rc-update add prov-backup-rsnapshot default
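You can then start the service right away and check that it came up (standard OpenRC commands):
/etc/init.d/prov-backup-rsnapshot start
/etc/init.d/prov-backup-rsnapshot status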
schedule warning
To install the schedule warning script, execute the following commands:
cd /var/work/
git clone --recursive https://github.com/stepping-stone/backup-surveillance.git
cd backup-surveillance/bin/
ln -s ../perl-utils/lib/PerlUtil/ PerlUtil
Configuration
vi /var/work/backup-surveillance/etc/config.conf
[XML]
SCHEDULE_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/scheduler.xml
SCHEDULE_XSD = %configpath%/../etc/schema/scheduler_schema.xsd
BACKUP_ENDED_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/backupEnded.xml
BACKUP_ENDED_XSD = %configpath%/../etc/schema/backupended_schema.xsd
BACKUP_STARTED_FILE = %homeDirectory%/incoming/%computerName%/.sepiola_backup/backupStarted.xml
BACKUP_STARTED_XSD = %configpath%/../etc/schema/backupstarted_schema.xsd

[TEMPLATE]
Salutation_Default_de-CH = Liebe Kundin / Lieber Kunde
Salutation_m_de-CH = Sehr geehrter Herr
Salutation_f_de-CH = Sehr geehrte Frau
Salutation_Default_en-GB = Dear customer
Salutation_m_en-GB = Dear Mr.
Salutation_f_en-GB = Dear Mrs.

[LDAP]
SERVER = ldaps://ldapm.tombstone.ch
PORT = 636
DEBUG = 1
ADMIN_DN = cn=Manager,dc=foss-cloud,dc=org
ADMIN_PASSWORD = <Password>
BACKUP_BASE = ou=accounts,ou=backup,ou=services,dc=foss-cloud,dc=org
PEOPLE_BASE = ou=people,dc=foss-cloud,dc=org
RESELLER_BASE = ou=reseller,ou=configuration,ou=backup,ou=services,dc=foss-cloud,dc=org
SCOPE = sub

[MAIL]
mailTo =
host = mail.stepping-stone.ch
port = 587
username =
password =
from =
writeAccountSize
If you have already installed the rsnapshot script, you also have the writeAccountSize script. Otherwise, follow the rsnapshot installation instructions above (the installation steps only).
Configuration
vi /var/work/backup-util/etc/writeAccountSize.conf
[Global]
INCOMING_DIRECTORY = /incoming
ACCOUNT_SIZE_FILE = /etc/backupSize
SNAPSHOTS = 1

[Syslog]
SYSLOG = rsnapshot

[Directory]
LDAP_SERVER = ldaps://ldapm.tombstone.ch
LDAP_PORT = 636
LDAP_BIND_DN = cn=Manager,dc=foss-cloud,dc=org
LDAP_BIND_PW = <password>
LDAP_BASE_DN = ou=accounts,ou=backup,ou=services,dc=foss-cloud,dc=org
LDAP_PERSON_BASE = ou=people,dc=foss-cloud,dc=org
LDAP_RESELLER_BASE = ou=reseller,ou=configuration,ou=backup,ou=services,dc=foss-cloud,dc=org
LDAP_EMAIL_ATTRIBUTE = mail

[Notification]
EMAIL_SENDER = stepping stone GmbH Support <support@stepping-stone.ch>
EMAIL_ALERT_THRESHOLD = 85
Salutation_Default_de-CH = Liebe Kundin / Lieber Kunde
Salutation_m_de-CH = Sehr geehrter Herr
Salutation_f_de-CH = Sehr geehrte Frau
Salutation_Default_en-GB = Dear customer
Salutation_m_en-GB = Dear Mr.
Salutation_f_en-GB = Dear Mrs.

[MAIL]
host = mail.stepping-stone.ch
port = 587
username = support@stepping-stone.ch
password = <password>
Links
- OpenLDAP, an open source implementation of the Lightweight Directory Access Protocol.
- nss-pam-ldapd, a Name Service Switch (NSS) module that allows your LDAP server to provide user account, group, host name, alias, netgroup, and basically any other information that you would normally get from /etc flat files or NIS.
- Gentoo guide to OpenLDAP authentication (in German).
- Centralized authentication using OpenLDAP.
- openssh-lpk_openldap.schema OpenSSH LDAP Public Keys.
- linuxquota Linux DiskQuota.
- rsnapshot, a remote filesystem snapshot utility, based on rsync.
- Jailkit, a set of utilities to limit user accounts to specific files using chroot() and/or specific commands. Also includes a tool to build a chroot environment.
- BusyBox, which combines tiny versions of many common UNIX utilities into a single small executable. Useful for reducing the number of files (and thus the complexity) when building a chroot.