stoney backup: Server set-up

== Requirements ==
A working stoney cloud installation, installed according to [[stoney cloud: Single-Node Installation]] or [[stoney cloud: Multi-Node Installation]].
== Keywords & USE-Flags ==
echo "sys-auth/nss-pam-ldapd ~amd64" >> /etc/portage/package.keywords
echo "sys-fs/quota ldap" >> /etc/portage/package.use
 
echo "=app-admin/jailkit-2.16 ~amd64" >> /etc/portage/package.keywords
For the prov-backup-rsnapshot daemon:
echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords
To build puttygen only without X11 (ebuild currently from our overlay, see [https://bugs.gentoo.org/show_bug.cgi?id=482816 Gentoo Bug #482816]):
echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords
echo "net-misc/putty -gtk" >> /etc/portage/package.use
== Emerge ==
emerge -va nss-pam-ldapd \
rsnapshot \
quota \
net-misc/putty \
app-admin/jailkit \
sys-apps/haveged \
sys-apps/sst-backup-utils \
sys-apps/sst-prov-backup-rsnapshot
To list the dependencies of the <code>sst-backup-utils</code> and <code>sst-prov-backup-rsnapshot</code> ebuilds, you can use <code>equery</code> from gentoolkit:
equery depgraph sys-apps/sst-backup-utils
<pre>
 * dependency graph for sys-apps/sst-backup-utils-0.1.0
 `-- sys-apps/sst-backup-utils-0.1.0  amd64
   `-- dev-perl/PerlUtil-0.1.0  (>=dev-perl/PerlUtil-0.1.0) amd64
   `-- virtual/perl-Sys-Syslog-0.320.0  (virtual/perl-Sys-Syslog) amd64
   `-- dev-perl/perl-ldap-0.530.0  (dev-perl/perl-ldap) amd64
   `-- dev-perl/XML-Simple-2.200.0  (dev-perl/XML-Simple) amd64
   `-- dev-perl/Config-IniFiles-2.780.0  (dev-perl/Config-IniFiles) amd64
   `-- dev-perl/XML-Validator-Schema-1.100.0  (dev-perl/XML-Validator-Schema) amd64
   `-- dev-perl/Date-Calc-6.300.0  (dev-perl/Date-Calc) amd64
   `-- dev-perl/DateManip-6.310.0  (dev-perl/DateManip) amd64
   `-- dev-perl/Schedule-Cron-Events-1.930.0  (dev-perl/Schedule-Cron-Events) amd64
   `-- dev-perl/DateTime-Format-Strptime-1.520.0  (dev-perl/DateTime-Format-Strptime) amd64
   `-- dev-perl/XML-SAX-0.990.0  (dev-perl/XML-SAX) amd64
   `-- virtual/perl-MIME-Base64-3.130.0-r2  (virtual/perl-MIME-Base64) amd64
   `-- dev-perl/Authen-SASL-2.160.0  (dev-perl/Authen-SASL) amd64
   `-- dev-perl/Net-SMTPS-0.30.0  (dev-perl/Net-SMTPS) ~amd64
   `-- dev-perl/text-template-1.450.0  (dev-perl/text-template) amd64
   `-- virtual/perl-Getopt-Long-2.380.0-r2  (virtual/perl-Getopt-Long) amd64
   `-- dev-perl/Parallel-ForkManager-1.20.0  (dev-perl/Parallel-ForkManager) amd64
   `-- dev-perl/Time-Stopwatch-1.0.0  (dev-perl/Time-Stopwatch) amd64
   `-- app-backup/rsnapshot-1.3.1-r1  (app-backup/rsnapshot) amd64
[ sys-apps/sst-backup-utils-0.1.0 stats: packages (20), max depth (1) ]
</pre>
For more information, visit the [http://www.gentoo.org/doc/en/gentoolkit.xml Gentoolkit] page.
= Base Server Software Configuration =
== OpenSSH ==
=== OpenSSH Configuration ===
=== Root CA Certificate Installation ===
Install the root CA certificate into the OpenSSL default certificate storage directory:
fqdn="cloud.stoney-cloud.org" # The fully qualified domain name of the server containing the root certificate.
cd /etc/ssl/certs/
wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem
chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
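OpenSSL only finds a CA in a CApath directory through a subject-hash symlink. Whether such a link is needed for this setup is not stated on this page, so the following is a hedged sketch: a throwaway self-signed certificate stands in for FOSS-Cloud_CA.cert.pem (on the real server you would run the hash and link steps against /etc/ssl/certs/FOSS-Cloud_CA.cert.pem instead):

```shell
# Demo only: generate a throwaway self-signed certificate in the current
# directory, standing in for the real FOSS-Cloud_CA.cert.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-ca" -keyout demo-ca.key -out demo-ca.pem 2>/dev/null

# OpenSSL's CApath lookup expects a <subject-hash>.0 symlink:
hash=$(openssl x509 -noout -hash -in demo-ca.pem)
ln -sf demo-ca.pem "${hash}.0"

# The certificate should now verify against the directory it sits in:
openssl verify -CApath . demo-ca.pem
```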
# server is listening on. If no port number is provided, the default port for the scheme is
# used (389 for ldap://, 636 for ldaps://). A space separated list of URIs may be provided.
URI ldaps://ldapm.stoney-cloud.org
# Used to specify the default base DN to use when performing ldap operations. The base must be
...
== Random Number Generator (haveged) ==
Tools like puttygen depend on random numbers to be able to create keys.
=== haveged - Generate random numbers and feed linux random device ===
The haveged daemon doesn't need any special configuration, therefore you can start it from the command line interface:
/etc/init.d/haveged start
Check, if the start was successful:
ps auxf | grep haveged
 
root 18001 1.0 0.0 7420 3616 ? Ss 08:48 0:00 /usr/sbin/haveged -r 0 -w 1024 -v 1
 
Add the haveged daemon to the default run level:
rc-update add haveged default
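To see whether haveged is keeping the entropy pool filled, you can read the kernel's entropy estimate. This is a generic Linux check, not something specific to this setup:

```shell
# The kernel exposes its current entropy estimate here; with haveged running
# it should stay high (haveged in the ps output above runs with -w 1024,
# i.e. it refills the pool whenever it drops below 1024 bits).
cat /proc/sys/kernel/random/entropy_avail
```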
== nss-pam-ldapd ==
=== nslcd.conf — configuration file for LDAP nameservice daemon ===
/etc/nslcd.conf
<pre>
...
</pre>
=== nsswitch.conf - Name Service Switch configuration file ===
/etc/nsswitch.conf
<pre>
...
</pre>
=== system-auth ===
vi /etc/pam.d/system-auth
<pre>
auth required pam_env.so
auth sufficient pam_unix.so try_first_pass likeauth nullok
auth sufficient pam_ldap.so minimum_uid=1000 use_first_pass
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_ldap.so minimum_uid=1000 use_first_pass
account optional pam_permit.so
password required pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password required pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password sufficient pam_ldap.so minimum_uid=1000 use_first_pass
password required pam_deny.so
session required pam_limits.so
session required pam_unix.so
session sufficient pam_ldap.so minimum_uid=1000 use_first_pass
session optional pam_permit.so
</pre>
 
=== Test the Setup ===
nslcd -d
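Once nslcd answers, the NSS wiring can be exercised with <code>getent</code>. The uid 4000187 below is the example backup account used in the quota section of this page and will only resolve on a server that actually has that LDAP entry:

```shell
# Local accounts must still resolve through the 'files' NSS source:
getent passwd root

# An LDAP backup account (example uid from this page); '|| true' because the
# entry only exists on a fully configured stoney backup server:
getent passwd 4000187 || true
```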
 
=== Update the Default Run Levels ===
rc-update add nslcd default
rc-update add nscd default
 
=== Start the necessary Daemons ===
/etc/init.d/nslcd start
/etc/init.d/nscd start
== Quota ==
=== 32-bit Project Identifier Support ===
We need to enable 32-bit project identifier support (PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536), which is already the default on the stepping stone virtual machines:
mkfs.xfs '''-i projid32bit=1''' /dev/vg-local-01/var
=== Update /etc/fstab and Mount ===
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:
LABEL=LV-VAR /var xfs noatime,discard,inode64,uquota,pquota 0 2
Reboot the server:
reboot
Check, if everything went ok:
df -h | grep var
/dev/mapper/vg--local--01-var 1023G 220G 804G 22% /var
=== Verify ===
* -c: Pass arguments on the command line. Multiple arguments may be given.
Remount the file system /var/backup and check, if /var/backup has the desired values: xfs_quota -x -c state /var/backup
As you can see (items marked bold), we have achieved our goal:
User quota state on /var/backup (/dev/mapper/vg--local--01-var)
Accounting: '''ON'''
Enforcement: '''ON'''
Inode: #131 (1 blocks, 1 extents)
Group quota state on /var/backup (/dev/mapper/vg--local--01-var)
Accounting: OFF
Enforcement: OFF
Inode: #132 (1 blocks, 1 extents)
Project quota state on /var/backup (/dev/mapper/vg--local--01-var)
Accounting: '''ON'''
Enforcement: '''ON'''
Inode: #132 (1 blocks, 1 extents)
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
==== Adding a User Quota ====
Set a quota of 1 Gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobyte are 1024 megabytes which corresponds to 1 gigabyte):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var/backup
Or in bytes:
xfs_quota -x -c 'limit bhard=1073741824 4000187' /var/backup
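The two limit values are the same size expressed in different units; a quick shell check of the arithmetic:

```shell
# 1 GiB = 1024 MiB = 1048576 KiB = 1073741824 bytes
kib=1048576
echo $((kib * 1024))   # bytes, as passed to bhard= in the second command
```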
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var/backup
If the user has data in the project, that belongs to him, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var/backup
==== Modifying a User Quota ====
To modify a user's quota, you just set a new quota (limit):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var/backup
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var/backup
If the user has data in the project, that belongs to him, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var/backup
==== Removing a User Quota ====
Removing a quota for a user:
xfs_quota -x -c 'limit bhard=0 4000187' /var/backup
The following command should give you an empty result:
xfs_quota -x -c 'quota -v -N -u 4000187' /var/backup
=== Project (Directory) Quotas ===
* project_ID: The uid of the online backup account (4000187).
* project_name: The uid of the online backup account (4000187). This could be a human readable name.
* mountpoint: The mountpoint of the xfs-filesystem (/var/backup). See the <code>/etc/fstab</code> entry from above.
* directory: The directory of the project (187/4000187), starting from the mountpoint of the xfs-filesystem (/var/backup).
Define a unique project ID for the directory hierarchy in the <code>/etc/projects</code> file (project_ID:mountpoint/directory):
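The page does not show the resulting file contents, so here is a hypothetical sketch of the two mapping files <code>xfs_quota</code> consults (written to example files rather than /etc, using the account 4000187 and the directory 187/4000187 from the bullet list above):

```shell
# /etc/projects maps project_ID:mountpoint/directory (sketch, example file)
echo "4000187:/var/backup/187/4000187" > projects.example

# /etc/projid maps project_name:project_ID (here both are the account uid)
echo "4000187:4000187" > projid.example

cat projects.example projid.example
```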
Set Project:
xfs_quota -x -c 'project -s -p /var/backup/187/4000187 4000187' /var/backup
Set Quota (limit) on Project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var/backup
Check your Quota (limit)
xfs_quota -x -c 'quota -p 4000187' /var/backup
Check the Quota:
* <code>-p</code>: display project quota information.
* <code>-h</code>: human readable format.
xfs_quota -x -c 'quota -v -N -p 4000187' /var/backup
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var/backup
If you copied data into the project, the output will look something like:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var/backup
To give you an overall view of the whole system:
xfs_quota -x -c report /var/backup
<pre>
User quota on /var/backup (/dev/mapper/vg--local--01-var)
Blocks
User ID Used Soft Hard Warn/Grace
4000187 0 0 1048576 00 [--------]
Project quota on /var/backup (/dev/mapper/vg--local--01-var)
Blocks
Project ID Used Soft Hard Warn/Grace
</pre>
==== Modifying a Project (Directory) Quota ====
To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var/backup
Check your quota (limit)
xfs_quota -x -c 'quota -p 4000187' /var/backup
==== Removing a Project (Directory) Quota ====
Removing a quota from a project:
xfs_quota -x -c 'limit -p bhard=0 4000187' /var/backup
Check the results:
xfs_quota -x -c report /var/backup
<pre>
User quota on /var/backup (/dev/mapper/vg--local--01-var)
Blocks
User ID Used Soft Hard Warn/Grace
</pre>
There is '''no need for quota file(s)''' in the root of the XFS filesystem.
== prov-backup-rsnapshot ==
Install the [[stoney_backup:_prov-backup-rsnapshot | prov-backup-rsnapshot]] daemon script using the package manager:
<pre>
emerge -va sys-apps/sst-prov-backup-rsnapshot
</pre>
 
=== Configuration ===
If it is the first provisioning module running on this server (very likely) you first have to configure the provisioning daemon (you can skip this step if you already have another provisioning module running on this server).
==== Provisioning global configuration ====
The global configuration for the provisioning daemon (which was installed with the first provisioning module and the <code>sys-apps/sst-provisioning</code> package) applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules.
/etc/Provisioning/Global.conf
<pre>
[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 0
# If true the script logs additional information to the log-file.
LOG_INFO = 1
# If true the script logs warnings to the log-file.
LOG_WARNING = 1
# If true the script logs errors to the log-file.
LOG_ERR = 1

# The number of seconds to wait before retry contacting the backend server during startup.
SLEEP = 10
# Number of backend server connection retries during startup.
ATTEMPTS = 3

[Operation Mode]
# The number of seconds to wait before retry contacting the backend server in case of a service interruption.
SLEEP = 30
# Number of backend server connection retries in case of a service interruption.
ATTEMPTS = 3

[Mail]
# Error messages are sent to the mail address configured below.
SENDTO = <YOUR-MAIL-ADDRESS>
HOST = mail.stepping-stone.ch
PORT = 587
USERNAME = <YOUR-NOTIFICATION-EMAIL-ADDRESS>
PASSWORD = <PASSWORD>
FROMNAME = Provisioning daemon
CA_DIR = /etc/ssl/certs
SSL = starttls
AUTH_METHOD = LOGIN
# Additionally, you can be informed about creation, modification and deletion of services.
WANTINFOMAIL = 1
</pre>
==== Provisioning module configuration ====
The module specific configuration file is located in /etc/Provisioning/<Service>/<Type>.conf. In the case of the prov-backup-rsnapshot module this is <code>/etc/Provisioning/Backup/Rsnapshot.conf</code>. (Note: comments starting with /* are not in the configuration file, they are only in the wiki to add some additional information.)
<pre>
#
/* If you want, you can override the log information from the global configuration file this might be useful for debugging */
[Global]
# If true the script logs errors to the log-file.
LOG_ERR = 1
/* Specify the hosts fully qualified domain name. This name will be used to perform some checks and also appear in the information and error mails */
ENVIRONMENT = <FQDN>
[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.tombstone.org
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /etc/Provisioning/Backup/rsnapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(sstProvisioningState=0))
/* Specifies the service itself. As it is the prov-backup-rsnapshot module, the SERVICE is "Backup" and the TYPE is "Rsnapshot".
* The MODUS is as usual selfcare and the TRANSPORTAPI is LocalCLI. This is because the daemon is running on the same host as the
* backup accounts are provisioned and the commands can be executed on this host using the cli.
* For more information about MODUS and TRANSPORTAPI see https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration
*/
[Service]
MODUS = selfcare
SERVICE = Backup
TYPE = Rsnapshot
SYSLOG = prov-backup-rsnapshot
/* For the TRANSPORTAPI LocalCLI there is no gateway required because there is no connection to establish. So set HOST, USER and
* DSA_FILE to whatever you want. Don't leave it blank, otherwise the provisioning daemon would log some error messages saying
* these attributes are empty
*/
[Gateway]
HOST = localhost
DSA_FILE = none
/* Information about the backup itself (how to setup everything). Note that the %uid% in the RSNAPSHOT_CONFIG_FILE parameter will
* be replaced by the accounts UID. The script CREATE_CHROOT_CMD was installed with the prov-backup-rsnapshot module, so do not
* change this parameter. The quota parameters (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE and PROJID_FILE) represent
* the quota setup as described on http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed this
* manual, you can copy-paste them into your configuration file, otherwise adapt them according to your quota setup.
*/
[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/xfs_quota
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh
MOUNTPOINT = /var
QUOTA_FILE = /etc/backupSize
PROJECTS_FILE = /etc/projects
PROJID_FILE = /etc/projid
</pre>
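How the %uid% placeholder in the RSNAPSHOT_CONFIG_FILE parameter expands is illustrated below; that it is a plain string substitution is an assumption for this sketch, demonstrated with sed:

```shell
# Hypothetical illustration: the daemon replaces %uid% with the account's uid,
# yielding one rsnapshot configuration file per backup account.
uid=4000187
template="/etc/rsnapshot/rsnapshot.conf.%uid%"
echo "$template" | sed "s/%uid%/$uid/"
```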
=== Init Scripts ===
The prov-backup-rsnapshot package installs an init script for the daemon. Adjust its settings in /etc/conf.d/prov-backup-rsnapshot:
<pre>
USER="root"
GROUP="root"

PIDFILE="/run/prov-backup-rsnapshot.pid"

# OPTIONS="..."
</pre>
==== Run-Level ====
Add the daemon to the default run level:
rc-update add prov-backup-rsnapshot default

== backup utils ==
Install the backup utils (multiple scripts which help you to manage and monitor your backup server and backup accounts) using the package manager. For more information about the scripts please see the [[stoney_backup:_Service_Software | stoney backup Service Software]] page.
<pre>
emerge -va sys-apps/sst-backup-utils
</pre>
 
== schedule warning ==
To install the new schedule warning script you have to execute the following commands:
<pre>
cd /var/work/
git clone --recursive https://github.com/stepping-stone/backup-surveillance.git
cd backup-surveillance/bin/
ln -s ../perl-utils/lib/PerlUtil/ PerlUtil
</pre>
 
=== Configuration ===
Please refer to the configuration sections for the different scripts on the [[stoney_backup:_Service_Software | stoney backup Service Software]] page.
= Links =
* [http://www.busybox.net/ Busybox] BusyBox combines tiny versions of many common UNIX utilities into a single small executable. Useful to reduce the number of files (and thus the complexity) when building a chroot.
[[Category:stoney backup]]