stoney backup: Server set-up

Abstract

This document describes server setup for the stoney cloud (Online) Backup service, built upon the Gentoo Linux distribution.

Overview

After working through this documentation, you will be able to set up and configure your own (Online) Backup service server.

Software Installation

Requirements

A working stoney cloud, installed according to stoney cloud: Single-Node Installation or stoney cloud: Multi-Node Installation.

Keywords & USE-Flags

For a minimal OpenLDAP directory installation:

echo "net-nds/openldap minimal sasl" >> /etc/portage/package.use
echo "net-nds/openldap ~amd64" >> /etc/portage/package.keywords

NSS and PAM modules for lookups using LDAP:

echo "sys-auth/nss-pam-ldapd sasl" >> /etc/portage/package.use
echo "sys-auth/nss-pam-ldapd ~amd64" >> /etc/portage/package.keywords
echo "sys-fs/quota ldap" >> /etc/portage/package.use
echo "=app-admin/jailkit-2.16 ~amd64" >> /etc/portage/package.keywords

For the prov-backup-rsnapshot daemon:

echo "dev-perl/Net-SMTPS ~amd64" >> /etc/portage/package.keywords
echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords

To build puttygen only without X11:

echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords
echo "net-misc/putty -gtk" >> /etc/portage/package.use

Emerge

emerge -va nss-pam-ldapd \
           quota \
           net-misc/putty \
           app-admin/jailkit \
           sys-apps/haveged \
           sys-apps/sst-backup-utils \
           sys-apps/sst-prov-backup-rsnapshot

To list the dependencies of ebuilds, you can use equery:

equery depgraph sst-backup-utils
 * Searching for sst-backup-utils ...

 * dependency graph for sys-apps/sst-backup-utils-0.1.0
 `--  sys-apps/sst-backup-utils-0.1.0  amd64 
   `--  dev-perl/PerlUtil-0.1.0  (>=dev-perl/PerlUtil-0.1.0) amd64 
   `--  virtual/perl-Sys-Syslog-0.320.0  (virtual/perl-Sys-Syslog) amd64 
   `--  dev-perl/perl-ldap-0.530.0  (dev-perl/perl-ldap) amd64 
   `--  dev-perl/XML-Simple-2.200.0  (dev-perl/XML-Simple) amd64 
   `--  dev-perl/Config-IniFiles-2.780.0  (dev-perl/Config-IniFiles) amd64 
   `--  dev-perl/XML-Validator-Schema-1.100.0  (dev-perl/XML-Validator-Schema) amd64 
   `--  dev-perl/Date-Calc-6.300.0  (dev-perl/Date-Calc) amd64 
   `--  dev-perl/DateManip-6.310.0  (dev-perl/DateManip) amd64 
   `--  dev-perl/Schedule-Cron-Events-1.930.0  (dev-perl/Schedule-Cron-Events) amd64 
   `--  dev-perl/DateTime-Format-Strptime-1.520.0  (dev-perl/DateTime-Format-Strptime) amd64 
   `--  dev-perl/XML-SAX-0.990.0  (dev-perl/XML-SAX) amd64 
   `--  virtual/perl-MIME-Base64-3.130.0-r2  (virtual/perl-MIME-Base64) amd64 
   `--  dev-perl/Authen-SASL-2.160.0  (dev-perl/Authen-SASL) amd64 
   `--  dev-perl/Net-SMTPS-0.30.0  (dev-perl/Net-SMTPS) ~amd64 
   `--  dev-perl/text-template-1.450.0  (dev-perl/text-template) amd64 
   `--  virtual/perl-Getopt-Long-2.380.0-r2  (virtual/perl-Getopt-Long) amd64 
   `--  dev-perl/Parallel-ForkManager-1.20.0  (dev-perl/Parallel-ForkManager) amd64 
   `--  dev-perl/Time-Stopwatch-1.0.0  (dev-perl/Time-Stopwatch) amd64 
   `--  app-backup/rsnapshot-1.3.1-r1  (app-backup/rsnapshot) amd64 
[ sys-apps/sst-backup-utils-0.1.0 stats: packages (20), max depth (1) ]

For more information, visit the Gentoolkit page.

Base Server Software Configuration

OpenSSH

OpenSSH Configuration

Configure the OpenSSH daemon:

vi /etc/ssh/sshd_config

Set the following options:

PubkeyAuthentication yes
PasswordAuthentication yes
UsePAM yes
Subsystem     sftp   internal-sftp

Make sure that Subsystem sftp internal-sftp is the last line in the configuration file.

We want to reduce the number of chroot environments per folder. As the ChrootDirectory configuration option only allows %h (the user's home directory) and %u (the user's username), we need to create the necessary matching rules in the form of:

Match User *000
  ChrootDirectory /var/backup/000/%u
  AuthorizedKeysFile /var/backup/000/%u/%h/.ssh/authorized_keys
Match
Match User *001
  ChrootDirectory /var/backup/001/%u
  AuthorizedKeysFile /var/backup/001/%u/%h/.ssh/authorized_keys
Match
...
Match User *999
  ChrootDirectory /var/backup/999/%u
  AuthorizedKeysFile /var/backup/999/%u/%h/.ssh/authorized_keys
Match

Create the matching rules by executing the following bash commands:

FILE=/etc/ssh/sshd_config;
 
for x in {0..999} ; do \
  printf "Match User *%03d\n" $x >> ${FILE}; \
  printf "  ChrootDirectory /var/backup/%03d/%%u\n" $x >> ${FILE}; \
  printf "  AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" $x >> ${FILE}; \
  printf "Match\n" >> ${FILE}; \
done
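
Before restarting, you can let sshd validate the generated configuration; in test mode it prints nothing when the file is syntactically correct:

/usr/sbin/sshd -t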

Don't forget to restart the OpenSSH daemon:

/etc/init.d/sshd restart

OpenSSH Host Keys

If you migrate from an existing backup server, you might want to copy the SSH host keys to the new server. If you do so, clients won't see a difference between the two hosts, as the fingerprint remains the same. Copy the following files from the existing host to the new one:

  • /etc/ssh/ssh_host_dsa_key
  • /etc/ssh/ssh_host_ecdsa_key
  • /etc/ssh/ssh_host_key
  • /etc/ssh/ssh_host_rsa_key
  • /etc/ssh/ssh_host_dsa_key.pub
  • /etc/ssh/ssh_host_ecdsa_key.pub
  • /etc/ssh/ssh_host_key.pub
  • /etc/ssh/ssh_host_rsa_key.pub

Set the correct permissions on the new host:

chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_key /etc/ssh/ssh_host_rsa_key
chmod 644 /etc/ssh/*.pub

Then restart the ssh daemon. Caution: do not close your existing ssh session until you are sure that the ssh daemon has restarted properly and you can log in again.

/etc/init.d/sshd restart
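
To double-check the migration, compare the host key fingerprints on the old and the new server; they must be identical. For example, for the RSA key:

ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub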

OpenLDAP

/etc/hosts

Update the /etc/hosts file with the LDAP server:

/etc/hosts
# VIP of the LDAP Server
31.216.40.4      ldapm.stoney-cloud.org

Root CA Certificate Installation

Install the root CA certificate into the OpenSSL default certificate storage directory:

fqdn="cloud.stoney-cloud.org"    # The fully qualified domain name of the server containing the root certificate.

cd /etc/ssl/certs/
wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem
chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem

Rebuild the CA hashes

c_rehash /etc/ssl/certs/
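
As a sanity check, you can verify that the root CA certificate is now found via the hashed directory (the command should answer with OK):

openssl verify -CApath /etc/ssl/certs /etc/ssl/certs/FOSS-Cloud_CA.cert.pem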

/etc/openldap/ldap.conf

Update the /etc/openldap/ldap.conf LDAP configuration file (or the corresponding environment variables):

/etc/openldap/ldap.conf
# Used to specify a size limit to use when performing searches. The number should be a
# non-negative integer. SIZELIMIT of zero (0) specifies unlimited search size.
SIZELIMIT       20000

# Used to specify a time limit to use when performing searches. The number should be a
# non-negative integer. TIMELIMIT of zero (0) specifies unlimited search time to be used.
TIMELIMIT       45

# Specify how aliases dereferencing is done. DEREF should be set to one of never, always, search,
# or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when
# searching, or dereferenced only when locating the base object for the search. The default is to
# never dereference aliases.
DEREF           never

# Specifies the URI(s) of an LDAP server(s) to which the LDAP library should connect. The URI
# scheme may be either ldap or ldaps, which refer to LDAP over TCP and LDAP over SSL (TLS)
# respectively. Each server's name can be specified as a domain-style name or an IP address
# literal. Optionally, the server's name can be followed by a ':' and the port number the LDAP
# server is listening on. If no port number is provided, the default port for the scheme is
# used (389 for ldap://, 636 for ldaps://). A space separated list of URIs may be provided.
URI             ldaps://ldapm.stoney-cloud.org

# Used to specify the default base DN to use when performing ldap operations. The base must be
# specified as a Distinguished Name in LDAP format.
BASE            dc=stoney-cloud,dc=org

# This is a local copy of the certificate of the certificate authority
# used to sign the server certificate for the LDAP server I am using
TLS_CACERT      /etc/ssl/certs/FOSS-Cloud_CA.cert.pem

Check your configuration by performing a search:

ldapsearch -v -H "ldaps://ldapm.stoney-cloud.org" \
              -b "dc=stoney-cloud,dc=org" \
              -D "cn=Manager,dc=stoney-cloud,dc=org" \
              -s one "(objectClass=*)" \
              -LLL -W

The result should look something like:

ldap_initialize( ldaps://ldapm.stoney-cloud.org:636/??base )
filter: (objectClass=*)
requesting: All userApplication attributes
dn: ou=administration,dc=stoney-cloud,dc=org
objectClass: top
objectClass: organizationalUnit
ou: administration
...

Random Number Generator (haveged)

Tools like putty depend on random numbers to be able to create keys and certificates.

haveged - Generate random numbers and feed linux random device

The haveged daemon doesn't need any special configuration, so you can start it directly from the command line:

/etc/init.d/haveged start

Check whether the start was successful:

ps auxf | grep haveged
root     18001  1.0  0.0   7420  3616 ?        Ss   08:48   0:00 /usr/sbin/haveged -r 0 -w 1024 -v 1

Add the haveged daemon to the default run level:

rc-update add haveged default
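
To see whether the entropy pool is being kept filled, inspect the kernel's entropy estimate; values in the thousands indicate a healthy pool:

cat /proc/sys/kernel/random/entropy_avail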

nss-pam-ldapd

nslcd.conf — configuration file for LDAP nameservice daemon

/etc/nslcd.conf
# This is the configuration file for the LDAP nameservice
# switch library's nslcd daemon. It configures the mapping
# between NSS names (see /etc/nsswitch.conf) and LDAP
# information in the directory.
# See the manual page nslcd.conf(5) for more information.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The uri pointing to the LDAP server to use for name lookups.
# Multiple entries may be specified. The address that is used
# here should be resolvable without using LDAP (obviously).
#uri ldap://127.0.0.1/
#uri ldaps://127.0.0.1/
#uri ldapi://%2fvar%2frun%2fldapi_sock/
# Note: %2f encodes the '/' used as directory separator
uri ldaps://ldapm.stoney-cloud.org

# The LDAP version to use (defaults to 3
# if supported by client library)
#ldap_version 3

# The distinguished name of the search base.
base dc=stoney-cloud,dc=org

# The distinguished name to bind to the server with.
# Optional: default is to bind anonymously.
binddn cn=Manager,dc=stoney-cloud,dc=org

# The credentials to bind with.
# Optional: default is no credentials.
# Note that if you set a bindpw you should check the permissions of this file.
bindpw myverysecretpassword

# The distinguished name to perform password modifications by root by.
#rootpwmoddn cn=admin,dc=example,dc=com

# The default search scope.
#scope sub
#scope one
#scope base

# Customize certain database lookups.
#base   group  ou=Groups,dc=example,dc=com
base   group  ou=groups,ou=backup,ou=services,dc=stoney-cloud,dc=org
base   passwd ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
base   shadow ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
#scope  group  onelevel
#scope  hosts  sub

#filter group  (&(objectClass=posixGroup)(sstIsActive=TRUE))
filter passwd (&(objectClass=posixAccount)(sstIsActive=TRUE))
filter shadow (&(objectClass=shadowAccount)(sstIsActive=TRUE))

# Bind/connect timelimit.
#bind_timelimit 30

# Search timelimit.
#timelimit 30

# Idle timelimit. nslcd will close connections if the
# server has not been contacted for the number of seconds.
#idle_timelimit 3600

# Use StartTLS without verifying the server certificate.
#ssl start_tls
tls_reqcert never

# CA certificates for server certificate verification
#tls_cacertdir /etc/ssl/certs
#tls_cacertfile /etc/ssl/ca.cert

# Seed the PRNG if /dev/urandom is not provided
#tls_randfile /var/run/egd-pool

# SSL cipher suite
# See man ciphers for syntax
#tls_ciphers TLSv1

# Client certificate and key
# Use these, if your server requires client authentication.
#tls_cert
#tls_key

# Mappings for Services for UNIX 3.5
#filter passwd (objectClass=User)
#map    passwd uid              msSFU30Name
#map    passwd userPassword     msSFU30Password
#map    passwd homeDirectory    msSFU30HomeDirectory
#map    passwd homeDirectory    msSFUHomeDirectory
#filter shadow (objectClass=User)
#map    shadow uid              msSFU30Name
#map    shadow userPassword     msSFU30Password
#filter group  (objectClass=Group)
#map    group  member           msSFU30PosixMember

# Mappings for Services for UNIX 2.0
#filter passwd (objectClass=User)
#map    passwd uid              msSFUName
#map    passwd userPassword     msSFUPassword
#map    passwd homeDirectory    msSFUHomeDirectory
#map    passwd gecos            msSFUName
#filter shadow (objectClass=User)
#map    shadow uid              msSFUName
#map    shadow userPassword     msSFUPassword
#map    shadow shadowLastChange pwdLastSet
#filter group  (objectClass=Group)
#map    group  member           posixMember

# Mappings for Active Directory
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map    passwd uid              sAMAccountName
#map    passwd homeDirectory    unixHomeDirectory
#map    passwd gecos            displayName
#filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map    shadow uid              sAMAccountName
#map    shadow shadowLastChange pwdLastSet
#filter group  (objectClass=group)

# Alternative mappings for Active Directory
# (replace the SIDs in the objectSid mappings with the value for your domain)
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(objectClass=person)(!(objectClass=computer)))
#map    passwd uid           cn
#map    passwd uidNumber     objectSid:S-1-5-21-3623811015-3361044348-30300820
#map    passwd gidNumber     objectSid:S-1-5-21-3623811015-3361044348-30300820
#map    passwd homeDirectory "/home/$cn"
#map    passwd gecos         displayName
#map    passwd loginShell    "/bin/bash"
#filter group (|(objectClass=group)(objectClass=person))
#map    group gidNumber      objectSid:S-1-5-21-3623811015-3361044348-30300820

# Mappings for AIX SecureWay
#filter passwd (objectClass=aixAccount)
#map    passwd uid              userName
#map    passwd userPassword     passwordChar
#map    passwd uidNumber        uid
#map    passwd gidNumber        gid
#filter group  (objectClass=aixAccessGroup)
#map    group  cn               groupName
#map    group  gidNumber        gid

nsswitch.conf - Name Service Switch configuration file

/etc/nsswitch.conf
passwd:      files ldap
shadow:      files ldap
group:       files ldap

# passwd:    db files nis
# shadow:    db files nis
# group:     db files nis

hosts:       files dns
networks:    files dns

services:    db files
protocols:   db files
rpc:         db files
ethers:      db files
netmasks:    files
netgroup:    files
bootparams:  files

automount:   files
aliases:     files

system-auth

vi /etc/pam.d/system-auth
auth            required        pam_env.so
auth            sufficient      pam_unix.so try_first_pass likeauth nullok
auth            sufficient      pam_ldap.so minimum_uid=1000 use_first_pass
auth            required        pam_deny.so

account         required        pam_unix.so
account         sufficient      pam_ldap.so minimum_uid=1000 use_first_pass

password        required        pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password        required        pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password        sufficient      pam_ldap.so minimum_uid=1000 use_first_pass
password        required        pam_deny.so

session         required        pam_limits.so
session         required        pam_env.so
session         required        pam_unix.so
session         sufficient      pam_ldap.so minimum_uid=1000 use_first_pass

Test the Setup

nslcd -d
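
nslcd -d keeps the daemon in the foreground and prints every request it handles. From a second shell you can then trigger a lookup through NSS with getent (4000080 is just an example backup account uid):

getent passwd 4000080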

Update the Default Run Levels

rc-update add nslcd default
rc-update add nscd default

Start the necessary Daemons

/etc/init.d/nslcd start
/etc/init.d/nscd start

Quota

32-bit Project Identifier Support

We need to enable 32-bit project identifier support (PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536), which is already the default on the stepping stone virtual machines:

mkfs.xfs -i projid32bit=1 /dev/vg-local-01/var
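
If the file system already exists, you can check whether the feature is enabled with xfs_info; recent xfsprogs versions show the flag in the meta-data section (the exact output format may vary):

xfs_info /var | grep projid32bit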

Update /etc/fstab and Mount

Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:

LABEL=LV-VAR            /var            xfs             noatime,discard,inode64,uquota,pquota  0 2
Since the quota options only take effect when the file system is mounted and /var is in use on a running system, the simplest way to activate them is a reboot:

reboot

Check that everything went OK:

df -h | grep var
/dev/mapper/vg--local--01-var  1023G  220G  804G  22% /var
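
You can also confirm that the quota mount options are active by inspecting /proc/mounts; note that the kernel may report them under slightly different names than the fstab spelling (for example usrquota and prjquota):

grep ' /var ' /proc/mounts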

Verify

Some important options for xfs_quota:

  • -x: Enable expert mode.
  • -c: Pass arguments on the command line. Multiple arguments may be given.

Remount the file system /var and check whether /var has the desired values:

xfs_quota -x -c state /var

As you can see (Accounting and Enforcement are ON for both the user and the project quotas), we have achieved our goal:

User quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: ON
  Enforcement: ON
  Inode: #131 (1 blocks, 1 extents)
Group quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: OFF
  Enforcement: OFF
  Inode: #132 (1 blocks, 1 extents)
Project quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: ON
  Enforcement: ON
  Inode: #132 (1 blocks, 1 extents)
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
Realtime Blocks grace time: [7 days 00:00:30]

User Quotas

Adding a User Quota

Set a quota of 1 Gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):

xfs_quota -x -c 'limit bhard=1048576k 4000187' /var

Or in bytes:

xfs_quota -x -c 'limit bhard=1073741824 4000187' /var

Read the quota information for the user 4000187:

xfs_quota -x -c 'quota -v -N -u 4000187' /var
/dev/mapper/vg--local--01-var                     0          0    1048576   00 [--------] /var

If the user has data in the project that belongs to them, the result will change:

/dev/mapper/vg--local--01-var                512000          0    1048576   00 [--------] /var

Modifying a User Quota

To modify a user's quota, you just set a new quota (limit):

xfs_quota -x -c 'limit bhard=1048576k 4000187' /var

Read the quota information for the user 4000187:

xfs_quota -x -c 'quota -v -N -u 4000187' /var
/dev/mapper/vg--local--01-var                     0          0    1048576   00 [--------] /var

If the user has data in the project that belongs to them, the result will change:

/dev/mapper/vg--local--01-var                512000          0    1048576   00 [--------] /var

Removing a User Quota

Removing a quota for a user:

xfs_quota -x -c 'limit bhard=0 4000187' /var

The following command should give you an empty result:

xfs_quota -x -c 'quota -v -N -u 4000187' /var
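
If you have to apply the same limit to a whole batch of accounts, a small shell loop saves typing (a minimal sketch; the uids below are placeholders):

for uid in 4000187 4000188 4000189; do
  xfs_quota -x -c "limit bhard=1048576k ${uid}" /var
done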

Project (Directory) Quotas

Adding a Project (Directory) Quota

The XFS file system additionally allows you to set quotas on individual directory hierarchies in the file system that are known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name. We'll use the following values in the examples:

  • project_ID: The uid of the online backup account (4000187).
  • project_name: The uid of the online backup account (4000187). This could be a human readable name.
  • mountpoint: The mountpoint of the xfs-filesystem (/var). See the /etc/fstab entry from above.
  • directory: The directory of the project (187/4000187), starting from the mountpoint of the xfs-filesystem (/var).

Define a unique project ID for the directory hierarchy in the /etc/projects file (project_ID:mountpoint/directory):

echo "4000187:/var/backup/187/4000187/home/4000187" >> /etc/projects

Create an entry in the /etc/projid file that maps a project name to the project ID (project_name:project_ID):

echo "4000187:4000187" >> /etc/projid

Set Project:

xfs_quota -x -c 'project -s -p /var/backup/187/4000187/home/4000187 4000187' /var

Set Quota (limit) on Project:

xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var

Check your Quota (limit)

xfs_quota -x -c 'quota -p 4000187' /var

Check the Quota:

  • -v: increase verbosity in reporting (also dumps zero values).
  • -N: suppress the initial header.
  • -p: display project quota information.
  • -h: human readable format.

xfs_quota -x -c 'quota -v -N -p 4000187' /var
/dev/mapper/vg--local--01-var                     0          0    1048576   00 [--------] /var

If you copied data into the project, the output will look something like:

/dev/mapper/vg--local--01-var                512000          0    1048576   00 [--------] /var

To give you an overall view of the whole system:

xfs_quota -x -c report /var
User quota on /var (/dev/mapper/vg--local--01-var)
                               Blocks                     
User ID          Used       Soft       Hard    Warn/Grace     
---------- -------------------------------------------------- 
root          1024000          0          0     00 [--------]
4000187             0          0    1048576     00 [--------]

Project quota on /var (/dev/mapper/vg--local--01-var)
                               Blocks                     
Project ID       Used       Soft       Hard    Warn/Grace     
---------- -------------------------------------------------- 
4000187        512000          0    1048576     00 [--------]
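
Taken together, the steps above lend themselves to a small helper script. The following is a minimal sketch, assuming the directory naming scheme used in this manual (/var/backup/<last three digits of the uid>/<uid>/home/<uid>); it is not part of the stoney backup packages:

#!/bin/bash
# add-project-quota.sh <uid> <limit> -- e.g.: add-project-quota.sh 4000187 1048576k
uid=$1
limit=$2
dir="/var/backup/${uid: -3}/${uid}/home/${uid}"   # naming scheme from this manual

echo "${uid}:${dir}" >> /etc/projects   # project_ID:mountpoint/directory
echo "${uid}:${uid}" >> /etc/projid     # project_name:project_ID
xfs_quota -x -c "project -s -p ${dir} ${uid}" /var
xfs_quota -x -c "limit -p bhard=${limit} ${uid}" /var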

Modifying a Project (Directory) Quota

To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:

xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var

Check your quota (limit)

xfs_quota -x -c 'quota -p 4000187' /var

Removing a Project (Directory) Quota

Removing a quota from a project:

xfs_quota -x -c 'limit -p bhard=0 4000187' /var

Check the results:

xfs_quota -x -c report /var
User quota on /var (/dev/mapper/vg--local--01-var)
                               Blocks                     
User ID          Used       Soft       Hard    Warn/Grace     
---------- -------------------------------------------------- 
root           512000          0          0     00 [--------]
4000187             0          0       1024     00 [--------]

As you can see, the following line for the Project ID 4000187 has disappeared from the report:

4000187        512000          0    1048576     00 [--------]

Don't forget to remove the project from /etc/projects and /etc/projid:

sed -i -e '/4000187/d' /etc/projects
sed -i -e '/4000187/d' /etc/projid
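
Note that the pattern above deletes every line containing 4000187 anywhere. If another project ID could contain these digits as a substring, anchoring the pattern to the beginning of the line is safer:

sed -i -e '/^4000187:/d' /etc/projects /etc/projid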

Some important notes concerning XFS

  1. The quotacheck command has no effect on XFS filesystems. The first time quota accounting is turned on (at mount time), XFS does an automatic quotacheck internally; afterwards, the quota system will always be completely consistent until quotas are manually turned off.
  2. There is no need for quota file(s) in the root of the XFS filesystem.
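
To see whether quota accounting and enforcement are currently active on a filesystem, xfs_quota provides a state command:

xfs_quota -x -c state /var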

prov-backup-rsnapshot

Install the prov-backup-rsnapshot daemon script using the package manager:

emerge -va sys-apps/sst-prov-backup-rsnapshot
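
To confirm that the package has been installed, you can, for example, query it with Gentoolkit:

equery list sys-apps/sst-prov-backup-rsnapshot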

Configuration

If this is the first provisioning module running on this server (very likely), you first have to configure the provisioning daemon. You can skip this step if another provisioning module is already running on this server.

Provisioning global configuration

The global configuration for the provisioning daemon (which was installed with the first provisioning module and the sys-apps/sst-provisioning package) applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules.

/etc/Provisioning/Global.conf
# Copyright (C) 2012 stepping stone GmbH
#                    Switzerland
#                    http://www.stepping-stone.ch
#                    support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#  
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

[Global]
# If true, the script logs debug information to the log-file.
LOG_DEBUG = 0

# If true, the script logs additional information to the log-file.
LOG_INFO = 1

# If true, the script logs warnings to the log-file.
LOG_WARNING = 1

# If true, the script logs errors to the log-file.
LOG_ERR = 1


# The number of seconds to wait before retrying to contact the backend server during startup.
SLEEP = 10

# Number of backend server connection retries during startup.
ATTEMPTS = 3

[Operation Mode]
# The number of seconds to wait before retrying to contact the backend server in case of a service interruption.
SLEEP = 30

# Number of backend server connection retries in case of a service interruption.
ATTEMPTS = 3

[Mail]
# Error messages are sent to the mail address configured below.
SENDTO = <YOUR-MAIL-ADDRESS>
HOST = mail.stepping-stone.ch
PORT = 587
USERNAME = <YOUR-NOTIFICATION-EMAIL-ADDRESS>
PASSWORD = <PASSWORD>
FROMNAME = Provisioning daemon
CA_DIR = /etc/ssl/certs
SSL = starttls
AUTH_METHOD = LOGIN

# Additionally, you can be informed about creation, modification and deletion of services.
WANTINFOMAIL = 1
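
Since this file contains the password of the notification mail account, it is advisable to restrict its permissions (a suggestion; adapt it to your own security policy):

chown root:root /etc/Provisioning/Global.conf
chmod 600 /etc/Provisioning/Global.conf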

Provisioning daemon prov-backup-rsnapshot module

The module-specific configuration is located in /etc/Provisioning/<Service>/<Type>.conf. In the case of the prov-backup-rsnapshot module, this is /etc/Provisioning/Backup/Rsnapshot.conf. (Note: Comments starting with /* are not in the configuration file; they appear only in the wiki to add some additional information.)

# Copyright (C) 2013 stepping stone GmbH
#                    Switzerland
#                    http://www.stepping-stone.ch
#                    support@stepping-stone.ch
#
# Authors:
#  Pat Kläy <pat.klaey@stepping-stone.ch>
#  
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

/* If you want, you can override the log settings from the global configuration file; this might be useful for debugging */
[Global]
# If true, the script logs debug information to the log-file.
LOG_DEBUG = 1

# If true, the script logs additional information to the log-file.
LOG_INFO = 1

# If true, the script logs warnings to the log-file.
LOG_WARNING = 1

# If true, the script logs errors to the log-file.
LOG_ERR = 1

/* Specify the host's fully qualified domain name. This name will be used to perform some checks and will also appear in the information and error mails */
ENVIRONMENT = <FQDN>
 
[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.stoney-cloud.org
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /etc/Provisioning/Backup/rsnapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(sstProvisioningState=0))

/* Specifies the service itself. As this is the prov-backup-rsnapshot module, the SERVICE is "Backup" and the TYPE is "Rsnapshot".
 * The MODUS is, as usual, selfcare, and the TRANSPORTAPI is LocalCLI, because the daemon runs on the same host on which the
 * backup accounts are provisioned, so the commands can be executed locally using the CLI.
 * For more information about MODUS and TRANSPORTAPI see https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration
 */
[Service]
MODUS = selfcare
TRANSPORTAPI = LocalCLI
SERVICE = Backup
TYPE = Rsnapshot

SYSLOG = prov-backup-rsnapshot

/* For the TRANSPORTAPI LocalCLI no gateway is required because there is no connection to establish. You can therefore set HOST,
 * USER and DSA_FILE to whatever you want. Do not leave them blank, otherwise the provisioning daemon will log error messages
 * saying that these attributes are empty.
 */
[Gateway]
HOST = localhost
USER = provisioning
DSA_FILE = none

/* Information about the backup itself (how to set up everything). Note that the %uid% in the RSNAPSHOT_CONFIG_FILE parameter will
 * be replaced by the account's UID. The script CREATE_CHROOT_CMD was installed with the prov-backup-rsnapshot module, so do not
 * change this parameter. The quota parameters (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE and PROJID_FILE) represent
 * the quota setup as described on http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed this
 * manual, you can copy-paste them into your configuration file; otherwise adapt them according to your quota setup.
 */
[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/xfs_quota
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh
MOUNTPOINT = /var
QUOTA_FILE = /etc/backupSize
PROJECTS_FILE = /etc/projects
PROJID_FILE = /etc/projid
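
Once the configuration is in place, the provisioning daemon can be started and added to the default runlevel. A sketch, assuming the package installs an OpenRC init script named prov-backup-rsnapshot (check /etc/init.d/ for the exact name):

/etc/init.d/prov-backup-rsnapshot start        # start the daemon now
rc-update add prov-backup-rsnapshot default    # start it automatically at boot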

backup utils

Install the backup utils (multiple scripts that help you manage and monitor your backup server and backup accounts) using the package manager. For more information about the scripts, please see the stoney backup Service Software page.

emerge -va sys-apps/sst-backup-utils

Configuration

Please refer to the configuration sections for the different scripts in stoney backup Service Software.

Links

  • OpenLDAP, an open source implementation of the Lightweight Directory Access Protocol.
  • nss-pam-ldapd, a Name Service Switch (NSS) module that allows your LDAP server to provide user account, group, host name, alias, netgroup, and basically any other information that you would normally get from /etc flat files or NIS.
  • Gentoo guide to OpenLDAP authentication (in German).
  • Centralized authentication using OpenLDAP.
  • openssh-lpk_openldap.schema, OpenSSH LDAP Public Keys.
  • linuxquota, the Linux DiskQuota tools.
  • rsnapshot, a remote filesystem snapshot utility, based on rsync.
  • Jailkit, a set of utilities to limit user accounts to specific files using chroot() and/or specific commands. It also includes a tool to build a chroot environment.
  • BusyBox, which combines tiny versions of many common UNIX utilities into a single small executable. Useful for reducing the number of files (and thus the complexity) when building a chroot.