stoney backup: Server set-up
Latest revision as of 08:41, 27 June 2014
Contents
- 1 Abstract
- 2 Overview
- 3 Software Installation
- 4 Base Server Software Configuration
- 4.1 OpenSSH
- 4.2 OpenLDAP
- 4.3 Random Number Generator (haveged)
- 4.4 nss-pam-ldapd
- 4.5 Quota
- 4.6 prov-backup-rsnapshot
- 4.7 backup utils
- 5 Links
Abstract
This document describes server setup for the stoney cloud (Online) Backup service, built upon the Gentoo Linux distribution.
Overview
After working through this documentation, you will be able to set up and configure your own (Online) Backup service server.
Software Installation
Requirements
A working stoney cloud, installed according to stoney cloud: Single-Node Installation or stoney cloud: Multi-Node Installation.
Keywords & USE-Flags
For a minimal OpenLDAP directory installation:
echo "net-nds/openldap minimal sasl" >> /etc/portage/package.use echo "net-nds/openldap ~amd64" >> /etc/portage/package.keywords
NSS and PAM modules for lookups using LDAP:
echo "sys-auth/nss-pam-ldapd sasl" >> /etc/portage/package.use echo "sys-auth/nss-pam-ldapd ~amd64" >> /etc/portage/package.keywords echo "sys-fs/quota ldap" >> /etc/portage/package.use
echo "=app-admin/jailkit-2.16 ~amd64" >> /etc/portage/package.keywords
For the prov-backup-rsnapshot daemon:
echo "dev-perl/Net-SMTPS ~amd64" >> /etc/portage/package.keywords echo "perl-core/Switch ~amd64" >> /etc/portage/package.keywords
To build only puttygen, without X11 support:
echo "net-misc/putty ~amd64" >> /etc/portage/package.keywords echo "net-misc/putty -gtk" >> /etc/portage/package.use
Emerge
emerge -va nss-pam-ldapd \
       quota \
       net-misc/putty \
       app-admin/jailkit \
       sys-apps/haveged \
       sys-apps/sst-backup-utils \
       sys-apps/sst-prov-backup-rsnapshot
To list the dependencies of ebuilds, you can use equery:
equery depgraph sst-backup-utils
 * Searching for sst-backup-utils ...

 * dependency graph for sys-apps/sst-backup-utils-0.1.0
 `-- sys-apps/sst-backup-utils-0.1.0  amd64
   `-- dev-perl/PerlUtil-0.1.0  (>=dev-perl/PerlUtil-0.1.0) amd64
   `-- virtual/perl-Sys-Syslog-0.320.0  (virtual/perl-Sys-Syslog) amd64
   `-- dev-perl/perl-ldap-0.530.0  (dev-perl/perl-ldap) amd64
   `-- dev-perl/XML-Simple-2.200.0  (dev-perl/XML-Simple) amd64
   `-- dev-perl/Config-IniFiles-2.780.0  (dev-perl/Config-IniFiles) amd64
   `-- dev-perl/XML-Validator-Schema-1.100.0  (dev-perl/XML-Validator-Schema) amd64
   `-- dev-perl/Date-Calc-6.300.0  (dev-perl/Date-Calc) amd64
   `-- dev-perl/DateManip-6.310.0  (dev-perl/DateManip) amd64
   `-- dev-perl/Schedule-Cron-Events-1.930.0  (dev-perl/Schedule-Cron-Events) amd64
   `-- dev-perl/DateTime-Format-Strptime-1.520.0  (dev-perl/DateTime-Format-Strptime) amd64
   `-- dev-perl/XML-SAX-0.990.0  (dev-perl/XML-SAX) amd64
   `-- virtual/perl-MIME-Base64-3.130.0-r2  (virtual/perl-MIME-Base64) amd64
   `-- dev-perl/Authen-SASL-2.160.0  (dev-perl/Authen-SASL) amd64
   `-- dev-perl/Net-SMTPS-0.30.0  (dev-perl/Net-SMTPS) ~amd64
   `-- dev-perl/text-template-1.450.0  (dev-perl/text-template) amd64
   `-- virtual/perl-Getopt-Long-2.380.0-r2  (virtual/perl-Getopt-Long) amd64
   `-- dev-perl/Parallel-ForkManager-1.20.0  (dev-perl/Parallel-ForkManager) amd64
   `-- dev-perl/Time-Stopwatch-1.0.0  (dev-perl/Time-Stopwatch) amd64
   `-- app-backup/rsnapshot-1.3.1-r1  (app-backup/rsnapshot) amd64
 [ sys-apps/sst-backup-utils-0.1.0 stats: packages (20), max depth (1) ]
For more information, visit the Gentoolkit page.
Base Server Software Configuration
OpenSSH
OpenSSH Configuration
Configure the OpenSSH daemon:
vi /etc/ssh/sshd_config
Set the following options:
PubkeyAuthentication yes
PasswordAuthentication yes
UsePAM yes
Subsystem sftp internal-sftp
Make sure that Subsystem sftp internal-sftp is the last line in the configuration file.
We want to reduce the number of chroot environments in any one folder. As the ChrootDirectory configuration option only allows %h (the home directory of the user) and %u (the username of the user), we need to create the necessary matching rules in the form of:
Match User *000
    ChrootDirectory /var/backup/000/%u
    AuthorizedKeysFile /var/backup/000/%u/%h/.ssh/authorized_keys
Match
Match User *001
    ChrootDirectory /var/backup/001/%u
    AuthorizedKeysFile /var/backup/001/%u/%h/.ssh/authorized_keys
Match
...
Match User *999
    ChrootDirectory /var/backup/999/%u
    AuthorizedKeysFile /var/backup/999/%u/%h/.ssh/authorized_keys
Match
The creation of the matching rules is done by executing the following bash commands:
FILE=/etc/ssh/sshd_config;
for x in {0..999} ; do \
    printf "Match User *%03d\n" $x >> ${FILE}; \
    printf "    ChrootDirectory /var/backup/%03d/%%u\n" $x >> ${FILE}; \
    printf "    AuthorizedKeysFile /var/backup/%03d/%%u/%%h/.ssh/authorized_keys\n" $x >> ${FILE}; \
    printf "Match\n" >> ${FILE}; \
done
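Since the loop appends roughly a thousand Match blocks, it can be worth letting OpenSSH validate the resulting file before you restart the daemon. A minimal sanity check (plain OpenSSH and grep, nothing stoney-specific):

# Test mode: parses /etc/ssh/sshd_config and reports syntax errors without restarting the daemon
sshd -t

# The loop should have appended 1000 "Match User" blocks (000 to 999)
grep -c "^Match User" /etc/ssh/sshd_config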
Don't forget to restart the OpenSSH daemon:
/etc/init.d/sshd restart
OpenSSH Host Keys
If you migrate from an existing backup server, you might want to copy the SSH host keys to the new server. If you do so, clients won't see a difference between the two hosts, as the fingerprint remains the same. Copy the following files from the existing host to the new one (an example copy command follows the list):
- /etc/ssh/ssh_host_dsa_key
- /etc/ssh/ssh_host_ecdsa_key
- /etc/ssh/ssh_host_key
- /etc/ssh/ssh_host_rsa_key
- /etc/ssh/ssh_host_dsa_key.pub
- /etc/ssh/ssh_host_ecdsa_key.pub
- /etc/ssh/ssh_host_key.pub
- /etc/ssh/ssh_host_rsa_key.pub
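A sketch of the copy step over SSH, assuming the old server is reachable as root; the host name old-backup.example.com is only a placeholder for your existing backup server:

# Copy all host keys (private and public) from the old server; adjust the host name to your environment
scp root@old-backup.example.com:/etc/ssh/ssh_host_* /etc/ssh/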
Set the correct permissions on the new host:
chmod 600 /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_key /etc/ssh/ssh_host_rsa_key
chmod 644 /etc/ssh/*.pub
And restart the SSH daemon. Caution: do not close your existing SSH session until you are sure the daemon has restarted properly and you can log in again.
/etc/init.d/sshd restart
OpenLDAP
/etc/hosts
Update the /etc/hosts file with the LDAP server:
/etc/hosts
# VIP of the LDAP Server
31.216.40.4    ldapm.stoney-cloud.org
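You can verify that the name now resolves locally, for example with getent (which honours /etc/nsswitch.conf and therefore /etc/hosts):

getent hosts ldapm.stoney-cloud.org
# Expected output is the VIP configured above:
# 31.216.40.4     ldapm.stoney-cloud.org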
Root CA Certificate Installation
Install the root CA certificate into the OpenSSL default certificate storage directory:
fqdn="cloud.stoney-cloud.org" # The fully qualified domain name of the server containing the root certificate. cd /etc/ssl/certs/ wget --no-check-certificate https://${fqdn}/ca/FOSS-Cloud_CA.cert.pem chown root:root /etc/ssl/certs/FOSS-Cloud_CA.cert.pem chmod 444 /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
Rebuild the CA hashes:
c_rehash /etc/ssl/certs/
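As a quick sanity check (plain OpenSSL, no stoney-specific tooling), you can verify that the freshly installed root certificate is found via the rehashed directory:

# Should print "/etc/ssl/certs/FOSS-Cloud_CA.cert.pem: OK" if the certificate and hash links are in place
openssl verify -CApath /etc/ssl/certs /etc/ssl/certs/FOSS-Cloud_CA.cert.pem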
/etc/openldap/ldap.conf
Update the /etc/openldap/ldap.conf LDAP configuration file (system-wide LDAP client defaults):
/etc/openldap/ldap.conf
# Used to specify a size limit to use when performing searches. The number should be a
# non-negative integer. SIZELIMIT of zero (0) specifies unlimited search size.
SIZELIMIT 20000

# Used to specify a time limit to use when performing searches. The number should be a
# non-negative integer. TIMELIMIT of zero (0) specifies unlimited search time to be used.
TIMELIMIT 45

# Specify how aliases dereferencing is done. DEREF should be set to one of never, always, search,
# or find to specify that aliases are never dereferenced, always dereferenced, dereferenced when
# searching, or dereferenced only when locating the base object for the search. The default is to
# never dereference aliases.
DEREF never

# Specifies the URI(s) of an LDAP server(s) to which the LDAP library should connect. The URI
# scheme may be either ldap or ldaps which refer to LDAP over TCP and LDAP over SSL (TLS)
# respectively. Each server's name can be specified as a domain-style name or an IP address
# literal. Optionally, the server's name can be followed by a ':' and the port number the LDAP
# server is listening on. If no port number is provided, the default port for the scheme is
# used (389 for ldap://, 636 for ldaps://). A space separated list of URIs may be provided.
URI ldaps://ldapm.stoney-cloud.org

# Used to specify the default base DN to use when performing ldap operations. The base must be
# specified as a Distinguished Name in LDAP format.
BASE dc=stoney-cloud,dc=org

# This is a local copy of the certificate of the certificate authority
# used to sign the server certificate for the LDAP server I am using
TLS_CACERT /etc/ssl/certs/FOSS-Cloud_CA.cert.pem
Check your configuration by doing a search:
ldapsearch -v -H "ldaps://ldapm.stoney-cloud.org" \
           -b "dc=stoney-cloud,dc=org" \
           -D "cn=Manager,dc=stoney-cloud,dc=org" \
           -s one "(objectClass=*)" \
           -LLL -W
The result should look something like:
ldap_initialize( ldaps://ldapm.stoney-cloud.org:636/??base )
filter: (objectClass=*)
requesting: All userApplication attributes
dn: ou=administration,dc=stoney-cloud,dc=org
objectClass: top
objectClass: organizationalUnit
ou: administration
...
Random Number Generator (haveged)
Tools like putty depend on random numbers to be able to create certificates.
haveged - Generate random numbers and feed linux random device
The haveged daemon doesn't need any special configuration; you can therefore start it straight from the command line:
/etc/init.d/haveged start
Check if the start was successful:
ps auxf | grep haveged
root 18001 1.0 0.0 7420 3616 ? Ss 08:48 0:00 /usr/sbin/haveged -r 0 -w 1024 -v 1
Add the haveged daemon to the default run level:
rc-update add haveged default
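To see whether haveged actually keeps the kernel entropy pool filled, you can look at the available entropy; with haveged running you would expect values well above 1000 bits:

cat /proc/sys/kernel/random/entropy_avail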
nss-pam-ldapd
nslcd.conf — configuration file for LDAP nameservice daemon
/etc/nslcd.conf
# This is the configuration file for the LDAP nameservice
# switch library's nslcd daemon. It configures the mapping
# between NSS names (see /etc/nsswitch.conf) and LDAP
# information in the directory.
# See the manual page nslcd.conf(5) for more information.

# The user and group nslcd should run as.
uid nslcd
gid nslcd

# The uri pointing to the LDAP server to use for name lookups.
# Multiple entries may be specified. The address that is used
# here should be resolvable without using LDAP (obviously).
#uri ldap://127.0.0.1/
#uri ldaps://127.0.0.1/
#uri ldapi://%2fvar%2frun%2fldapi_sock/
# Note: %2f encodes the '/' used as directory separator
uri ldaps://ldapm.tombstone.ch

# The LDAP version to use (defaults to 3
# if supported by client library)
#ldap_version 3

# The distinguished name of the search base.
base dc=stoney-cloud,dc=org

# The distinguished name to bind to the server with.
# Optional: default is to bind anonymously.
binddn cn=Manager,dc=stoney-cloud,dc=org

# The credentials to bind with.
# Optional: default is no credentials.
# Note that if you set a bindpw you should check the permissions of this file.
bindpw myverysecretpassword

# The distinguished name to perform password modifications by root by.
#rootpwmoddn cn=admin,dc=example,dc=com

# The default search scope.
#scope sub
#scope one
#scope base

# Customize certain database lookups.
#base group ou=Groups,dc=example,dc=com
base group ou=groups,ou=backup,ou=services,dc=stoney-cloud,dc=org
base passwd ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
base shadow ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
#scope group onelevel
#scope hosts sub
#filter group (&(objectClass=posixGroup)(sstIsActive=TRUE))
filter passwd (&(objectClass=posixAccount)(sstIsActive=TRUE))
filter shadow (&(objectClass=shadowAccount)(sstIsActive=TRUE))

# Bind/connect timelimit.
#bind_timelimit 30

# Search timelimit.
#timelimit 30

# Idle timelimit. nslcd will close connections if the
# server has not been contacted for the number of seconds.
#idle_timelimit 3600

# Use StartTLS without verifying the server certificate.
#ssl start_tls
tls_reqcert never

# CA certificates for server certificate verification
#tls_cacertdir /etc/ssl/certs
#tls_cacertfile /etc/ssl/ca.cert

# Seed the PRNG if /dev/urandom is not provided
#tls_randfile /var/run/egd-pool

# SSL cipher suite
# See man ciphers for syntax
#tls_ciphers TLSv1

# Client certificate and key
# Use these, if your server requires client authentication.
#tls_cert
#tls_key

# Mappings for Services for UNIX 3.5
#filter passwd (objectClass=User)
#map passwd uid msSFU30Name
#map passwd userPassword msSFU30Password
#map passwd homeDirectory msSFU30HomeDirectory
#map passwd homeDirectory msSFUHomeDirectory
#filter shadow (objectClass=User)
#map shadow uid msSFU30Name
#map shadow userPassword msSFU30Password
#filter group (objectClass=Group)
#map group member msSFU30PosixMember

# Mappings for Services for UNIX 2.0
#filter passwd (objectClass=User)
#map passwd uid msSFUName
#map passwd userPassword msSFUPassword
#map passwd homeDirectory msSFUHomeDirectory
#map passwd gecos msSFUName
#filter shadow (objectClass=User)
#map shadow uid msSFUName
#map shadow userPassword msSFUPassword
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=Group)
#map group member posixMember

# Mappings for Active Directory
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map passwd uid sAMAccountName
#map passwd homeDirectory unixHomeDirectory
#map passwd gecos displayName
#filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
#map shadow uid sAMAccountName
#map shadow shadowLastChange pwdLastSet
#filter group (objectClass=group)

# Alternative mappings for Active Directory
# (replace the SIDs in the objectSid mappings with the value for your domain)
#pagesize 1000
#referrals off
#idle_timelimit 800
#filter passwd (&(objectClass=user)(objectClass=person)(!(objectClass=computer)))
#map passwd uid cn
#map passwd uidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820
#map passwd homeDirectory "/home/$cn"
#map passwd gecos displayName
#map passwd loginShell "/bin/bash"
#filter group (|(objectClass=group)(objectClass=person))
#map group gidNumber objectSid:S-1-5-21-3623811015-3361044348-30300820

# Mappings for AIX SecureWay
#filter passwd (objectClass=aixAccount)
#map passwd uid userName
#map passwd userPassword passwordChar
#map passwd uidNumber uid
#map passwd gidNumber gid
#filter group (objectClass=aixAccessGroup)
#map group cn groupName
#map group gidNumber gid
nsswitch.conf - Name Service Switch configuration file
/etc/nsswitch.conf
passwd:      files ldap
shadow:      files ldap
group:       files ldap

# passwd:    db files nis
# shadow:    db files nis
# group:     db files nis

hosts:       files dns
networks:    files dns

services:    db files
protocols:   db files
rpc:         db files
ethers:      db files
netmasks:    files
netgroup:    files
bootparams:  files

automount:   files
aliases:     files
system-auth
vi /etc/pam.d/system-auth
auth       required     pam_env.so
auth       sufficient   pam_unix.so try_first_pass likeauth nullok
auth       sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
auth       required     pam_deny.so

account    required     pam_unix.so
account    sufficient   pam_ldap.so minimum_uid=1000 use_first_pass

password   required     pam_cracklib.so difok=2 minlen=8 dcredit=2 ocredit=2 retry=3
password   required     pam_unix.so try_first_pass use_authtok nullok sha512 shadow
password   sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
password   required     pam_deny.so

session    required     pam_limits.so
session    required     pam_env.so
session    required     pam_unix.so
session    sufficient   pam_ldap.so minimum_uid=1000 use_first_pass
Test the Setup
nslcd -d
Update the Default Run Levels
rc-update add nslcd default
rc-update add nscd default
Start the necessary Daemons
/etc/init.d/nslcd start
/etc/init.d/nscd start
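Once nslcd and nscd are running, LDAP-backed lookups can be verified with getent; the uid 4000187 used below is only an example backup account from later sections of this page:

# Should return the account from LDAP in passwd format
getent passwd 4000187

# Group lookups go through the same channel
getent group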
Quota
32-bit Project Identifier Support
We need to enable 32-bit project identifier support (PROJID32BIT feature) for our naming scheme (uid numbers larger than 65'536), which is already the default on the stepping stone virtual machines:
mkfs.xfs -i projid32bit=1 /dev/vg-local-01/var
Update /etc/fstab and Mount
Make sure that you have user quota (uquota) and project quota (pquota) set as options on the chosen mount point in /etc/fstab. For example:
LABEL=LV-VAR /var xfs noatime,discard,inode64,uquota,pquota 0 2
reboot
Check if everything went OK:
df -h | grep var
/dev/mapper/vg--local--01-var 1023G 220G 804G 22% /var
Verify
Some important options for xfs_quota:
- -x: Enable expert mode.
- -c: Pass arguments on the command line. Multiple arguments may be given.
Remount the file system /var and check if /var has the desired values:
xfs_quota -x -c state /var
As you can see, we have achieved our goal (user and project quota accounting and enforcement are ON):
User quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: ON
  Enforcement: ON
  Inode: #131 (1 blocks, 1 extents)
Group quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: OFF
  Enforcement: OFF
  Inode: #132 (1 blocks, 1 extents)
Project quota state on /var (/dev/mapper/vg--local--01-var)
  Accounting: ON
  Enforcement: ON
  Inode: #132 (1 blocks, 1 extents)
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
Realtime Blocks grace time: [7 days 00:00:30]
User Quotas
Adding a User Quota
Set a quota of 1 gigabyte for the user 4000187 (the values are in kilobytes, so 1048576 kilobytes are 1024 megabytes, which corresponds to 1 gigabyte):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var
Or in bytes:
xfs_quota -x -c 'limit bhard=1073741824 4000187' /var
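Because xfs_quota accepts plain byte counts as well as suffixed values, the two commands above are equivalent; a small shell sketch of the arithmetic:

# 1 GiB expressed in KiB and in bytes
echo $((1024 * 1024))          # 1048576     -> bhard=1048576k
echo $((1024 * 1024 * 1024))   # 1073741824  -> bhard=1073741824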
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var
If the user already has data in the project that belongs to him, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var
Modifying a User Quota
To modify a user's quota, you just set a new quota (limit):
xfs_quota -x -c 'limit bhard=1048576k 4000187' /var
Read the quota information for the user 4000187:
xfs_quota -x -c 'quota -v -N -u 4000187' /var
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var
If the user already has data in the project that belongs to him, the result will change:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var
Removing a User Quota
Removing a quota for a user:
xfs_quota -x -c 'limit bhard=0 4000187' /var
The following command should give you an empty result:
xfs_quota -x -c 'quota -v -N -u 4000187' /var
Project (Directory) Quotas
Adding a Project (Directory) Quota
The XFS file system additionally allows you to set quotas on individual directory hierarchies in the file system that are known as managed trees. Each managed tree is uniquely identified by a project ID and an optional project name. We'll use the following values in the examples:
- project_ID: The uid of the online backup account (4000187).
- project_name: The uid of the online backup account (4000187). This could be a human readable name.
- mountpoint: The mountpoint of the xfs-filesystem (/var). See the
/etc/fstab
entry from above. - directory: The directory of the project (187/4000187), starting from the mountpoint of the xfs-filesystem (/var).
Define a unique project ID for the directory hierarchy in the /etc/projects
file (project_ID:mountpoint/directory):
echo "4000187:/var/backup/187/4000187/home/4000187" >> /etc/projects
Create an entry in the /etc/projid
file that maps a project name to the project ID (project_name:project_ID):
echo "4000187:4000187" >> /etc/projid
Set Project:
xfs_quota -x -c 'project -s -p /var/backup/187/4000187/home/4000187 4000187' /var
Set Quota (limit) on Project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var
Check your quota (limit):
xfs_quota -x -c 'quota -p 4000187' /var
Check the Quota:
- -v: increase verbosity in reporting (also dumps zero values).
- -N: suppress the initial header.
- -p: display project quota information.
- -h: human readable format.
xfs_quota -x -c 'quota -v -N -p 4000187' /var
/dev/mapper/vg--local--01-var 0 0 1048576 00 [--------] /var
If you copied data into the project, the output will look something like:
/dev/mapper/vg--local--01-var 512000 0 1048576 00 [--------] /var
To give you an overall view of the whole system:
xfs_quota -x -c report /var
User quota on /var (/dev/mapper/vg--local--01-var)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root          1024000          0          0     00 [--------]
4000187             0          0    1048576     00 [--------]

Project quota on /var (/dev/mapper/vg--local--01-var)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
4000187        512000          0    1048576     00 [--------]
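The individual steps above (projects file, projid file, project initialisation and limit) can also be wrapped into one small helper script. This is only a sketch; the uid, directory and 1 GiB limit are example values taken from this page and have to be adapted to your accounts:

#!/bin/bash
# Hypothetical helper: register an XFS project quota for one backup account
UID_NUM=4000187                                      # backup account uid (example)
DIR=/var/backup/187/${UID_NUM}/home/${UID_NUM}       # directory to be limited
LIMIT=1048576k                                       # hard block limit (1 GiB)

echo "${UID_NUM}:${DIR}" >> /etc/projects            # project_ID:directory
echo "${UID_NUM}:${UID_NUM}" >> /etc/projid          # project_name:project_ID
xfs_quota -x -c "project -s -p ${DIR} ${UID_NUM}" /var
xfs_quota -x -c "limit -p bhard=${LIMIT} ${UID_NUM}" /var
xfs_quota -x -c "quota -v -N -p ${UID_NUM}" /var     # show the result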
Modifying a Project (Directory) Quota
To modify a project (directory) quota, you just set a new quota (limit) on the chosen project:
xfs_quota -x -c 'limit -p bhard=1048576k 4000187' /var
Check your quota (limit):
xfs_quota -x -c 'quota -p 4000187' /var
Removing a Project (Directory) Quota
Removing a quota from a project:
xfs_quota -x -c 'limit -p bhard=0 4000187' /var
Check the results:
xfs_quota -x -c report /var
User quota on /var (/dev/mapper/vg--local--01-var)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root           512000          0          0     00 [--------]
4000187             0          0       1024     00 [--------]
As you can see, the line with the Project ID 4000187 has disappeared:
4000187 512000 0 1048576 00 [--------]
Don't forget to remove the project from /etc/projects and /etc/projid:
sed -i -e '/4000187/d' /etc/projects
sed -i -e '/4000187/d' /etc/projid
Some important notes concerning XFS
- The quotacheck command has no effect on XFS filesystems. The first time quota accounting is turned on (at mount time), XFS does an automatic quotacheck internally; afterwards, the quota system will always be completely consistent until quotas are manually turned off.
- There is no need for quota file(s) in the root of the XFS filesystem.
prov-backup-rsnapshot
Install the prov-backup-rsnapshot daemon script using the package manager:
emerge -va sys-apps/sst-prov-backup-rsnapshot
Configuration
If this is the first provisioning module running on this server (which is very likely), you first have to configure the provisioning daemon. You can skip this step if another provisioning module is already running on this server.
Provisioning global configuration
The global configuration for the provisioning daemon (which was installed with the first provisioning module and the sys-apps/sst-provisioning package) applies to all provisioning modules running on the server. This configuration therefore contains information about the provisioning daemon itself and no information at all about the specific modules.
/etc/Provisioning/Global.conf
# Copyright (C) 2012 stepping stone GmbH
# Switzerland
# http://www.stepping-stone.ch
# support@stepping-stone.ch
#
# Authors:
# Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 0

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

# The number of seconds to wait before retry contacting the backend server during startup.
SLEEP = 10

# Number of backend server connection retries during startup.
ATTEMPTS = 3

[Operation Mode]
# The number of seconds to wait before retry contacting the backend server in case of a service interruption.
SLEEP = 30

# Number of backend server connection retries in case of a service interruption.
ATTEMPTS = 3

[Mail]
# Error messages are sent to the mail address configured below.
SENDTO = <YOUR-MAIL-ADDRESS>
HOST = mail.stepping-stone.ch
PORT = 587
USERNAME = <YOUR-NOTIFICATION-EMAIL-ADDRESS>
PASSWORD = <PASSWORD>
FROMNAME = Provisioning daemon
CA_DIR = /etc/ssl/certs
SSL = starttls
AUTH_METHOD = LOGIN

# Additionally, you can be informed about creation, modification and deletion of services.
WANTINFOMAIL = 1
Provisioning daemon prov-backup-rsnapshot module
The module specific configuration is located in /etc/Provisioning/&lt;Service&gt;/&lt;Type&gt;.conf. In the case of the prov-backup-rsnapshot module this is /etc/Provisioning/Backup/Rsnapshot.conf. (Note: Comments starting with /* are not in the configuration file, they are only in the wiki to add some additional information.)
# Copyright (C) 2013 stepping stone GmbH
# Switzerland
# http://www.stepping-stone.ch
# support@stepping-stone.ch
#
# Authors:
# Pat Kläy <pat.klaey@stepping-stone.ch>
#
# Licensed under the EUPL, Version 1.1.
#
# You may not use this work except in compliance with the
# Licence.
# You may obtain a copy of the Licence at:
#
# http://www.osor.eu/eupl
#
# Unless required by applicable law or agreed to in
# writing, software distributed under the Licence is
# distributed on an "AS IS" basis,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied.
# See the Licence for the specific language governing
# permissions and limitations under the Licence.
#

/* If you want, you can override the log settings from the global configuration file;
 * this might be useful for debugging */
[Global]
# If true the script logs every information to the log-file.
LOG_DEBUG = 1

# If true the script logs additional information to the log-file.
LOG_INFO = 1

# If true the script logs warnings to the log-file.
LOG_WARNING = 1

# If true the script logs errors to the log-file.
LOG_ERR = 1

/* Specify the host's fully qualified domain name. This name will be used to perform some
 * checks and will also appear in the information and error mails */
ENVIRONMENT = <FQDN>

[Database]
BACKEND = LDAP
SERVER = ldaps://ldapm.tombstone.org
PORT = 636
ADMIN_USER = cn=Manager,dc=stoney-cloud,dc=org
ADMIN_PASSWORD = <PASSWORD>
SERVICE_SUBTREE = ou=accounts,ou=backup,ou=services,dc=stoney-cloud,dc=org
COOKIE_FILE = /etc/Provisioning/Backup/rsnapshot.cookie
DEFAULT_COOKIE = rid=001,csn=
SEARCH_FILTER = (&(entryCSN>=%entryCSN%)(sstProvisioningState=0))

/* Specifies the service itself. As it is the prov-backup-rsnapshot module, the SERVICE is "Backup" and
 * the TYPE is "Rsnapshot". The MODUS is as usual selfcare and the TRANSPORTAPI is LocalCLI. This is
 * because the daemon is running on the same host as the backup accounts are provisioned and the commands
 * can be executed on this host using the cli. For more information about MODUS and TRANSPORTAPI see
 * https://int.stepping-stone.ch/wiki/provisioning.pl#Service_Konfiguration
 */
[Service]
MODUS = selfcare
TRANSPORTAPI = LocalCLI
SERVICE = Backup
TYPE = Rsnapshot
SYSLOG = prov-backup-rsnapshot

/* For the TRANSPORTAPI LocalCLI there is no gateway required because there is no connection to establish.
 * So set HOST, USER and DSA_FILE to whatever you want. Don't leave them blank, otherwise the provisioning
 * daemon would log some error messages saying these attributes are empty
 */
[Gateway]
HOST = localhost
USER = provisioning
DSA_FILE = none

/* Information about the backup itself (how to set up everything). Note that the %uid% in the
 * RSNAPSHOT_CONFIG_FILE parameter will be replaced by the account's UID. The script CREATE_CHROOT_CMD was
 * installed with the prov-backup-rsnapshot module, so do not change this parameter. The quota parameters
 * (SET_QUOTA_CMD, MOUNTPOINT, QUOTA_FILE, PROJECTS_FILE and PROJID_FILE) represent the quota setup as
 * described on http://wiki.stoney-cloud.org/index.php/stoney_backup:_Server_set-up#Quota. If you followed
 * this manual, you can copy-paste them into your configuration file, otherwise adapt them according to
 * your quota setup.
 */
[Backup]
RSNAPSHOT_CONFIG_FILE = /etc/rsnapshot/rsnapshot.conf.%uid%
SET_QUOTA_CMD = /usr/sbin/xfs_quota
CREATE_CHROOT_CMD = /usr/libexec/createBackupDirectory.sh
MOUNTPOINT = /var
QUOTA_FILE = /etc/backupSize
PROJECTS_FILE = /etc/projects
PROJID_FILE = /etc/projid
backup utils
Install the backup utils (multiple scripts which help you to manage and monitor your backup server and backup accounts) using the package manager. For more information about the scripts please see the stoney backup Service Software page.
emerge -va sys-apps/sst-backup-utils
Configuration
Please refer to the configuration sections for the different scripts in stoney backup Service Software.
Links
- OpenLDAP, an open source implementation of the Lightweight Directory Access Protocol.
- nss-pam-ldapd, a Name Service Switch (NSS) module that allows your LDAP server to provide user account, group, host name, alias, netgroup, and basically any other information that you would normally get from /etc flat files or NIS.
- Gentoo Leitfaden zur OpenLDAP Authentifikation.
- Centralized authentication using OpenLDAP.
- openssh-lpk_openldap.schema OpenSSH LDAP Public Keys.
- linuxquota Linux DiskQuota.
- rsnapshot, a remote filesystem snapshot utility, based on rsync.
- Jailkit, set of utilities to limit user accounts to specific files using chroot() and or specific commands. Also includes a tool to build a chroot environment.
- Busybox BusyBox combines tiny versions of many common UNIX utilities into a single small executable. Useful to reduce the number of files (and thus the complexity) when building a chroot.