<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.stoney-cloud.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lucas</id>
	<title>stoney-cloud.org - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.stoney-cloud.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lucas"/>
	<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/wiki/Special:Contributions/Lucas"/>
	<updated>2026-04-14T22:24:50Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3165</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3165"/>
		<updated>2014-02-23T13:10:52Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as i said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is useable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
* after running the last command until all the errors were fixed, I can try running in agent mode: &amp;lt;code&amp;gt;puppet agent --test --server=`hostname -f`&amp;lt;/code&amp;gt;&lt;br /&gt;
** i still need to figure out why the &amp;lt;code&amp;gt;--server&amp;lt;/code&amp;gt; flag is needed at this stage, somehow the agent is consulting DNS rather than &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;&lt;br /&gt;
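The puppet apply invocations above differ only in the manifest expression and the extra flags; a minimal sketch of a wrapper that assembles that command line (papply and ENVDIR are my names, not part of the notes):

```shell
#!/bin/sh
# Minimal sketch, not from the original notes: papply() assembles the long
# "puppet apply" command line used above so the --modulepath does not have
# to be retyped. It only prints the command; pipe the output to sh to run it.
ENVDIR=/etc/puppet/environments/development

papply() {
  expr="$1"   # manifest expression, e.g. 'include ::role::puppet::master'
  shift       # any remaining args are extra flags such as --noop
  echo puppet apply --environment=development \
    --modulepath=$ENVDIR/modules/:$ENVDIR/manifests/ \
    -e "'$expr'" --pluginsync "$@"
}

papply 'include ::role::puppet::master' --noop   # prints the dry-run variant
papply 'include ::role::puppet::master'          # prints the real run
```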
&lt;br /&gt;
now for some hacking I did to test some concepts:&lt;br /&gt;
* set up openldap tooling: &amp;lt;code&amp;gt;emerge openldap&amp;lt;/code&amp;gt;&lt;br /&gt;
* search for machine: &amp;lt;code&amp;gt;ldapsearch -D &#039;cn=Manager,dc=stoney-cloud,dc=org&#039; -w admin &#039;(&amp;amp;(objectClass=sstVirtualizationVirtualMachine)(sstNetworkHostname=kvm-0231))&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
* open ldap port in fw: &amp;lt;code&amp;gt;ldap_pub_out=&amp;quot;10.1.130.13&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;openTcpPortOut &amp;quot;${chains_out[pub]}&amp;quot; &amp;quot;$ldap_pub_out&amp;quot;         &amp;quot;636&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
** i also need to configure &amp;lt;code&amp;gt;ldaps_int_in=&amp;quot;${ip_int[vm-test-02]} ${ip_int[vm-test-03]} 192.168.140.136&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vm-test-01/vm-test-01_chain.sh&amp;lt;/code&amp;gt; for the above to work.&lt;br /&gt;
** after all the above I still cannot connect from my node to the ldap server. I&#039;ll have to get the iptables gurus on board to solve this. We need more documentation on the setup if a simple dev should be able to change this. At some point I might even consider puppetizing the iptables config.&lt;br /&gt;
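For reference, the machine lookup as a tiny script; vm_filter is a hypothetical helper of mine, while the bind DN, the admin test password and the attribute names come straight from the ldapsearch call above:

```shell
#!/bin/sh
# Hypothetical helper, not from the original notes: builds the LDAP
# AND-filter used in the ldapsearch call above. \046 is the printf
# octal escape for an ampersand.
BINDDN='cn=Manager,dc=stoney-cloud,dc=org'

vm_filter() {
  # Combine objectClass and hostname into a single AND filter
  printf '(\046(objectClass=sstVirtualizationVirtualMachine)(sstNetworkHostname=%s))\n' "$1"
}

# Prints the full command for inspection; pipe to sh to run it for real
echo ldapsearch -D "$BINDDN" -w admin "$(vm_filter kvm-0231)"
```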
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data urls&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* install rgen for the future puppet parser at some sensible point in the bootstrapping&lt;br /&gt;
* figure out what is going on here: &amp;lt;code&amp;gt;Feb 22 22:30:01 vm-test-01 ulogd[30493]: p_kvm-0231_0_in Denied dst:: IN=vmbr0 OUT=vmbr0 MAC=01:00:5e:00:00:12:00:00:5e:00:01:03:08:00 SRC=192.168.140.2 DST=224.0.0.18 LEN=56 TOS=10 PREC=0x00 TTL=255 ID=33458 DF PROTO=112 MARK=0 &amp;lt;/code&amp;gt;&lt;br /&gt;
* get rid of &amp;lt;code&amp;gt;/vagrant&amp;lt;/code&amp;gt; hard-deps.&lt;br /&gt;
* rebuild git with &amp;lt;code&amp;gt;USE=&amp;quot;curl&amp;quot;&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3164</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3164"/>
		<updated>2014-02-23T12:07:35Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as i said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is useable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
* after running the last command until all the errors were fixed, I can try running in agent mode: &amp;lt;code&amp;gt;puppet agent --test --server=`hostname -f`&amp;lt;/code&amp;gt;&lt;br /&gt;
** i still need to figure out why the &amp;lt;code&amp;gt;--server&amp;lt;/code&amp;gt; flag is needed at this stage, somehow the agent is consulting DNS rather than &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
now for some hacking I did to test some concepts:&lt;br /&gt;
* set up openldap tooling: &amp;lt;code&amp;gt;emerge openldap&amp;lt;/code&amp;gt;&lt;br /&gt;
* search for machine: &amp;lt;code&amp;gt;ldapsearch -D &#039;cn=Manager,dc=stoney-cloud,dc=org&#039; -w admin &#039;(&amp;amp;(objectClass=sstVirtualizationVirtualMachine)(sstNetworkHostname=kvm-0231))&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
* open ldap port in fw: &amp;lt;code&amp;gt;ldap_pub_out=&amp;quot;10.1.130.13&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;openTcpPortOut &amp;quot;${chains_out[pub]}&amp;quot; &amp;quot;$ldap_pub_out&amp;quot;         &amp;quot;636&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
** i also need to configure &amp;lt;code&amp;gt;ldaps_int_in=&amp;quot;${ip_int[vm-test-02]} ${ip_int[vm-test-03]} 192.168.140.136&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vm-test-01/vm-test-01_chain.sh&amp;lt;/code&amp;gt; for the above to work.&lt;br /&gt;
** after all the above I still cannot connect from my node to the ldap server. I&#039;ll have to get the iptables gurus on board to solve this. We need more documentation on the setup if a simple dev should be able to change this. At some point I might even consider puppetizing the iptables config.&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ascii art)&lt;br /&gt;
** I removed them from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use github https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data urls&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for the future puppet parser at some sensible point in the bootstrapping&lt;br /&gt;
* figure out what is going on here: &amp;lt;code&amp;gt;Feb 22 22:30:01 vm-test-01 ulogd[30493]: p_kvm-0231_0_in Denied dst:: IN=vmbr0 OUT=vmbr0 MAC=01:00:5e:00:00:12:00:00:5e:00:01:03:08:00 SRC=192.168.140.2 DST=224.0.0.18 LEN=56 TOS=10 PREC=0x00 TTL=255 ID=33458 DF PROTO=112 MARK=0 &amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3163</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3163"/>
		<updated>2014-02-22T21:49:03Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as i said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is useable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
* after running the last command until all the errors were fixed, I can try running in agent mode: &amp;lt;code&amp;gt;puppet agent --test --server=`hostname -f`&amp;lt;/code&amp;gt;&lt;br /&gt;
** i still need to figure out why the &amp;lt;code&amp;gt;--server&amp;lt;/code&amp;gt; flag is needed at this stage, somehow the agent is consulting DNS rather than &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
now for some hacking I did to test some concepts:&lt;br /&gt;
* set up openldap tooling: &amp;lt;code&amp;gt;emerge openldap&amp;lt;/code&amp;gt;&lt;br /&gt;
* search for machine: &amp;lt;code&amp;gt;ldapsearch -D &#039;cn=Manager,dc=stoney-cloud,dc=org&#039; -w admin &#039;(&amp;amp;(objectClass=sstVirtualizationVirtualMachine)(sstNetworkHostname=kvm-0231))&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
* open ldap port in fw: &amp;lt;code&amp;gt;ldap_pub_out=&amp;quot;10.1.130.13&amp;quot;&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;openTcpPortOut &amp;quot;${chains_out[pub]}&amp;quot; &amp;quot;$ldap_pub_out&amp;quot;         &amp;quot;636&amp;quot;&amp;lt;/code&amp;gt;&lt;br /&gt;
** i also need to configure &amp;lt;code&amp;gt;ldaps_int_in=&amp;quot;${ip_int[vm-test-02]} ${ip_int[vm-test-03]} 192.168.140.136&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vm-test-01/vm-test-01_chain.sh&amp;lt;/code&amp;gt; for the above to work.&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ascii art)&lt;br /&gt;
** I removed them from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use github https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data urls&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for the future puppet parser at some sensible point in the bootstrapping&lt;br /&gt;
* figure out what is going on here: &amp;lt;code&amp;gt;Feb 22 22:30:01 vm-test-01 ulogd[30493]: p_kvm-0231_0_in Denied dst:: IN=vmbr0 OUT=vmbr0 MAC=01:00:5e:00:00:12:00:00:5e:00:01:03:08:00 SRC=192.168.140.2 DST=224.0.0.18 LEN=56 TOS=10 PREC=0x00 TTL=255 ID=33458 DF PROTO=112 MARK=0 &amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3162</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3162"/>
		<updated>2014-02-22T21:31:15Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as i said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is useable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
* after running the last command until all the errors were fixed, I can try running in agent mode: &amp;lt;code&amp;gt;puppet agent --test --server=`hostname -f`&amp;lt;/code&amp;gt;&lt;br /&gt;
** i still need to figure out why the &amp;lt;code&amp;gt;--server&amp;lt;/code&amp;gt; flag is needed at this stage, somehow the agent is consulting DNS rather than &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
now for some hacking I did to test some concepts:&lt;br /&gt;
* set up openldap tooling: &amp;lt;code&amp;gt;emerge openldap&amp;lt;/code&amp;gt;&lt;br /&gt;
* search for machine: &amp;lt;code&amp;gt;ldapsearch -D &#039;cn=Manager,dc=stoney-cloud,dc=org&#039; -w admin &#039;(&amp;amp;(objectClass=sstVirtualizationVirtualMachine)(sstNetworkHostname=kvm-0231))&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ascii art)&lt;br /&gt;
** I removed them from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use github https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data urls&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for the future puppet parser at some sensible point in the bootstrapping&lt;br /&gt;
* figure out what is going on here: &amp;lt;code&amp;gt;Feb 22 22:30:01 vm-test-01 ulogd[30493]: p_kvm-0231_0_in Denied dst:: IN=vmbr0 OUT=vmbr0 MAC=01:00:5e:00:00:12:00:00:5e:00:01:03:08:00 SRC=192.168.140.2 DST=224.0.0.18 LEN=56 TOS=10 PREC=0x00 TTL=255 ID=33458 DF PROTO=112 MARK=0 &amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3161</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3161"/>
		<updated>2014-02-22T21:06:07Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as i said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is useable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
* after running the last command until all the errors were fixed, I can try running in agent mode: &amp;lt;code&amp;gt;puppet agent --test --server=`hostname -f`&amp;lt;/code&amp;gt;&lt;br /&gt;
** i still need to figure out why the &amp;lt;code&amp;gt;--server&amp;lt;/code&amp;gt; flag is needed at this stage, somehow the agent is consulting DNS rather than &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ascii art)&lt;br /&gt;
** I removed them from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use github https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data urls&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for the future puppet parser at some sensible point in the bootstrapping&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3160</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3160"/>
		<updated>2014-02-22T18:37:21Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
* after running the last command repeatedly until all the errors were fixed, I can try running in agent mode: &amp;lt;code&amp;gt;puppet agent --test --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for puppet parser future at some sensible part of bootstrapping&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3159</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3159"/>
		<updated>2014-02-22T18:16:37Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync &amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for puppet parser future at some sensible part of bootstrapping&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3158</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3158"/>
		<updated>2014-02-22T18:09:24Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo &amp;amp;&amp;amp; emerge dev-ruby/rgen  --autounmask-write &amp;amp;&amp;amp; dispatch-conf &amp;amp;&amp;amp; emerge dev-ruby/rgen&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;br /&gt;
* install rgen for puppet parser future at some sensible part of bootstrapping&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3157</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3157"/>
		<updated>2014-02-22T18:07:30Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048 &amp;amp;&amp;amp; emerge sudo&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;br /&gt;
* figure out why layman-add from betagarden needs sudo&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3156</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3156"/>
		<updated>2014-02-22T18:05:18Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf &amp;amp;&amp;amp; ulimit -n 2048&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;br /&gt;
* figure out why the betagarden overlay needs &amp;lt;code&amp;gt;ulimit -n 2048&amp;lt;/code&amp;gt; to clone&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3155</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3155"/>
		<updated>2014-02-22T17:47:31Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant &amp;amp;&amp;amp; mkdir /usr/local/portage &amp;amp;&amp;amp; touch /usr/local/portage/make.conf&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;br /&gt;
* don&#039;t depend on /usr/local/portage/make.conf&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3154</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3154"/>
		<updated>2014-02-22T17:40:44Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build an lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;; activate the virtio drivers in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a UI&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into that more&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
* test if puppet is usable: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;notify{&amp;quot;test&amp;quot;:}&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
* let puppet rip: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --pluginsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed them from the Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub HTTPS URLs throughout, they are simply proxy-friendlier everywhere&lt;br /&gt;
* refactor the role and profile things into proper modules and use proper puppet:// data URLs&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3153</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3153"/>
		<updated>2014-02-22T17:37:04Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* workaround some TODOs: &amp;lt;code&amp;gt;ln -s /etc/puppet/environments/development/ /vagrant&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed these from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data URLs&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3152</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3152"/>
		<updated>2014-02-22T17:35:29Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack &amp;lt;code&amp;gt;/usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh&amp;lt;/code&amp;gt; on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: &amp;lt;code&amp;gt;lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&amp;lt;/code&amp;gt;&lt;br /&gt;
* kernel build with: &amp;lt;code&amp;gt;genkernel --install --lvm --menuconfig all&amp;lt;/code&amp;gt; (do not use &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt;, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually &amp;lt;code&amp;gt;genkernel --install --lvm --kernel-config=/root/kernel.config all&amp;lt;/code&amp;gt; since lazy me hates using a ui&lt;br /&gt;
** the &amp;lt;code&amp;gt;--virtio&amp;lt;/code&amp;gt; switch seems screwed due to some oldconfig changes with the &amp;lt;code&amp;gt;VIRTIO_MMIO&amp;lt;/code&amp;gt; system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set &amp;lt;code&amp;gt;GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot;&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;/etc/default/grub&amp;lt;/code&amp;gt; (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: &amp;lt;code&amp;gt;emerge dev-vcs/git vim&amp;lt;/code&amp;gt;&lt;br /&gt;
* now for puppet: &amp;lt;code&amp;gt;USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* before using puppet: &amp;lt;code&amp;gt;emerge eix &amp;amp;&amp;amp; eix-update&amp;lt;/code&amp;gt;&lt;br /&gt;
* clone puppet tree: &amp;lt;code&amp;gt;git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&amp;lt;/code&amp;gt;&lt;br /&gt;
* install librarian: &amp;lt;code&amp;gt;gem19 install librarian-puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
* load puppet modules: &amp;lt;code&amp;gt;cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&amp;lt;/code&amp;gt;&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: &amp;lt;code&amp;gt;puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --noop&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* replace the silly block-char headers in orcatamer with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed these from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data URLs&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3151</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3151"/>
		<updated>2014-02-22T17:33:10Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack /usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&lt;br /&gt;
* kernel build with: genkernel --install --lvm --menuconfig all (do not use --virtio, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually genkernel --install --lvm --kernel-config=/root/kernel.config all since lazy me hates using a ui&lt;br /&gt;
** the --virtio switch seems screwed due to some oldconfig changes with the VIRTIO_MMIO system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot; in /etc/default/grub (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: emerge dev-vcs/git vim&lt;br /&gt;
* now for puppet: USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&lt;br /&gt;
* before using puppet: emerge eix &amp;amp;&amp;amp; eix-update&lt;br /&gt;
* clone puppet tree: git clone https://github.com/purplehazech/purplehazech-orcatamer.git /etc/puppet/environments/development&lt;br /&gt;
* install librarian: gem19 install librarian-puppet&lt;br /&gt;
* load puppet modules: cd /etc/puppet/environments/development &amp;amp;&amp;amp; librarian-puppet install&lt;br /&gt;
* run puppet like so to find the first batch of stuff to fix: puppet apply --environment=development --modulepath=/etc/puppet/environments/development/modules/:/etc/puppet/environments/development/manifests/ -e &#039;include ::role::puppet::master&#039; --noop&lt;br /&gt;
&lt;br /&gt;
== TODOs ==&lt;br /&gt;
* [ ] replace the silly block-char headers with something most tools don&#039;t bork on (i.e. some ASCII art)&lt;br /&gt;
** I removed these from Puppetfile and Modulefile to get librarian to run&lt;br /&gt;
* use GitHub https URLs throughout, they are simply more proxy-friendly everywhere&lt;br /&gt;
* refactor role and profile things into proper modules and use proper puppet:// data URLs&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3150</id>
		<title>User:Lucas/Gentoo Install Notes</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas/Gentoo_Install_Notes&amp;diff=3150"/>
		<updated>2014-02-22T17:13:19Z</updated>

		<summary type="html">&lt;p&gt;Lucas: Created page with &amp;quot;* hack /usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh on host to allow gateway conns * first hd is /dev/vda * default gentoo handbook install with lvm setup o...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* hack /usr/local/scripts/netfilter/local/chains/vms/kvm_0231_chain.sh on host to allow gateway conns&lt;br /&gt;
* first hd is /dev/vda&lt;br /&gt;
* default gentoo handbook install with lvm setup on vda3 and one large lv_root&lt;br /&gt;
* install lvm2 so you can build a lvm initramfs&lt;br /&gt;
** if you skip this you will have tons of fun loading lvm in the initramfs shell: lvm vgscan --mknodes &amp;amp;&amp;amp; lvm lvchange -a ly vg01/lv_root&lt;br /&gt;
* kernel build with: genkernel --install --lvm --menuconfig all (do not use --virtio, activate them in menuconfig instead, I had heaps of fun hunting down all the modules)&lt;br /&gt;
** actually genkernel --install --lvm --kernel-config=/root/kernel.config all since lazy me hates using a ui&lt;br /&gt;
** the --virtio switch seems screwed due to some oldconfig changes with the VIRTIO_MMIO system, but I haven&#039;t looked into it further&lt;br /&gt;
* remember to also set GRUB_CMDLINE_LINUX=&amp;quot;dolvm&amp;quot; in /etc/default/grub (as I said before, a ton of fun)&lt;br /&gt;
* more things to install on new machines: emerge dev-vcs/git vim&lt;br /&gt;
* now for puppet: USE=&amp;quot;augeas vim-syntax&amp;quot; emerge puppet&lt;br /&gt;
* before using puppet: emerge eix &amp;amp;&amp;amp; eix-update&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas&amp;diff=3149</id>
		<title>User:Lucas</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas&amp;diff=3149"/>
		<updated>2014-02-22T13:33:55Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Itsa Mee :)&lt;br /&gt;
== Things ==&lt;br /&gt;
* [[User:Lucas/Gentoo Install Notes]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3036</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3036"/>
		<updated>2014-02-06T17:27:05Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* build orchestration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using Gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs. puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based off a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a Ruby stack, which is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** EMC/Puppet Labs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for Gentoo&lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags, for example a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (i.e. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-dependency updates and security updates (which must be provided whether the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (i.e. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
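&lt;br /&gt;
To illustrate the binhost scheme above, a client&#039;s &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; could look roughly like this (a rough sketch; the hostnames, environment and profile names are placeholders, nothing is decided yet):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sketch of /etc/portage/make.conf on a managed client&lt;br /&gt;
PORTAGE_BINHOST=&amp;quot;https://packages.example.com/development/gentoo/amd64/server https://packages.example.com/development/gentoo/amd64/php5-app&amp;quot;&lt;br /&gt;
GENTOO_MIRRORS=&amp;quot;https://mirror.example.com/public/gentoo/distfiles&amp;quot;&lt;br /&gt;
EMERGE_DEFAULT_OPTS=&amp;quot;--usepkg --getbinpkg&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;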
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
*** this should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
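&lt;br /&gt;
The branch handling described above could be sketched like so (the branch and remote names are assumptions, nothing is set up yet):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# sync the development branch from the upstream portage tree&lt;br /&gt;
git checkout development &amp;amp;&amp;amp; git pull upstream master&lt;br /&gt;
# promote a tested state one environment up&lt;br /&gt;
git checkout staging &amp;amp;&amp;amp; git merge development&lt;br /&gt;
git checkout production &amp;amp;&amp;amp; git merge staging&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;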
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
** Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example) to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle due to it not defining how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* Register a PEN OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; those will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
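The tagging workflow above can be sketched with plain git; the version number and author identity below are placeholders, and the throwaway repo stands in for a real component repository:

```shell
set -e
# Throwaway repo standing in for a real component repository.
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "initial commit"
# Annotated SemVer tag in vX.Y.Z form; GitHub exposes such tags as
# downloadable tarballs that ebuilds can reference.
git -c user.name=demo -c user.email=demo@example.org \
    tag -a v1.0.0 -m "release v1.0.0"
git tag -l   # → v1.0.0
```

A `git push origin v1.0.0` would then publish the tag to the hosting side.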
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either before or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build farm proposal ==&lt;br /&gt;
The build farm consists of multiple vms which build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
* Git webhook on internal gitlab install pushes changes to jenkins master.&lt;br /&gt;
* Jenkins master dishes out jobs to jenkins slave machines for needed architecture and build profile.&lt;br /&gt;
* Jenkins slaves only get used once and wipe/reprovision themselves after master has stored build artefacts.&lt;br /&gt;
* We have build-slave templates available for each architecture/build profile combo.&lt;br /&gt;
* Upon use those get provisioned to the needed environment using puppet.&lt;br /&gt;
* All of this is set up using puppet and fully automated, even building of new build-slave templates and the whole releng on those.&lt;br /&gt;
* The build farm also keeps old templates and stable boxes on hold so it can use them to build differentials.&lt;br /&gt;
* Artefacts slaves will be producing:&lt;br /&gt;
** &amp;quot;vagrant&amp;quot;-style boot boxes&lt;br /&gt;
** full binpkg repos for a given env/arch/build profile combo&lt;br /&gt;
** stage3 balls for each arch/build profile&lt;br /&gt;
** stage4 balls for each environment&lt;br /&gt;
** build logs&lt;br /&gt;
** &amp;lt;code&amp;gt;/var/db/pkg&amp;lt;/code&amp;gt;&lt;br /&gt;
** puppet report data&lt;br /&gt;
** test results and code analysis results&lt;br /&gt;
* When we come to continuous deployment, the jenkins master will also be able to trigger puppet when merges to master happen.&lt;br /&gt;
* This rolls out releases to the sub-system that was signed off by a merge to a master branch (see branching strategy in git proposal).&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
* [http://mesos.apache.org/ Apache Mesos] cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. Can run for instance Jenkins.&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest; among other things it shows that google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is sabayon&#039;s portage replacement; it focuses on binaries due to sabayon being a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box, I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRYing up ([https://github.com/jedi4ever/veewee/pull/690]) to make it worth forking&lt;br /&gt;
* [http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4 mkstage4]&lt;br /&gt;
** aimed at creating backup stage4 tarballs of gentoo systems&lt;br /&gt;
** written in bash&lt;br /&gt;
** pretty simple, might come in handy as automation tool&lt;br /&gt;
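At its core a stage4 tarball is just an archive of the root filesystem with pseudo-filesystems excluded; the sketch below imitates that on a throwaway directory (paths and the compression choice are illustrative; mkstage4 adds more excludes and safety checks):

```shell
set -e
# ROOT stands in for / so the sketch stays side-effect free.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/conf.d" "$ROOT/proc"
echo 'hostname="demo"' > "$ROOT/etc/conf.d/hostname"
# Archive everything below ROOT, keeping permissions and numeric
# owners, while excluding the /proc pseudo-filesystem.
tar -C "$ROOT" -czpf stage4.tar.gz --numeric-owner --exclude=./proc .
tar -tzf stage4.tar.gz
```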
&lt;br /&gt;
==== kernel ====&lt;br /&gt;
&lt;br /&gt;
* at the moment we build tarballs for the kernel+initramfs and the modules using &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; and have a separate ebuild which installs them&lt;br /&gt;
* ideally we would like to have an ebuild which takes the kernel sources (like the ebuild for &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt; does), builds it according to some default configuration or a user configuration if available (&amp;lt;code&amp;gt;savedconfig.eclass&amp;lt;/code&amp;gt;) and then installs the kernel and the modules as well as some minimal headers+configuration to build other packages requiring the sources to be present&lt;br /&gt;
* TODO: check whether dracut has some advantages regarding module loading over genkernel-generated initramfs&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** github may use webhooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
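A minimal sketch of the role/profile split described above; all class names and the component module parameter are hypothetical, the real modules would live in github.com/radiorabe/puppet:

```puppet
# Role: the business view. A node gets exactly one role.
class role::puppet::master {
  include profile::base
  include profile::puppet::master
}

# Profile: the implementation. Wires component modules together;
# configuration data is resolved through hiera.
class profile::puppet::master {
  class { '::puppet':
    server => true,  # hypothetical parameter of a component module
  }
}
```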
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** openssl has the largest userbase, which should make it easier on new admins&lt;br /&gt;
** features that openssl does not implement get used as soon as openssl catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
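The hierarchy above can be bootstrapped with stock openssl; names, lifetimes and key sizes below are illustrative, and a real ceremony would additionally set CA extensions (basicConstraints=CA:TRUE with a pathlen) via an openssl config file:

```shell
set -e
cd "$(mktemp -d)"
# Root CA: self-signed, long-lived, kept offline.
openssl genrsa -out root-ca.key 2048
openssl req -new -x509 -sha256 -days 3650 -key root-ca.key \
    -subj "/DC=ch/DC=rabe/CN=RaBe Root CA" -out root-ca.crt
# Level 1 intermediate: key + CSR, signed by the root.
openssl genrsa -out level1-ca.key 2048
openssl req -new -sha256 -key level1-ca.key \
    -subj "/DC=ch/DC=rabe/CN=RaBe Level 1 CA" -out level1-ca.csr
openssl x509 -req -sha256 -days 1825 -in level1-ca.csr \
    -CA root-ca.crt -CAkey root-ca.key -CAcreateserial \
    -out level1-ca.crt
openssl verify -CAfile root-ca.crt level1-ca.crt
```

Level 2 CAs would then be signed by the level 1 key the same way.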
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* adhere to git-flow for all the things. Automate said usage as far as possible.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=4 | git-flow branching &lt;br /&gt;
|-&lt;br /&gt;
! Branch&lt;br /&gt;
! Environment&lt;br /&gt;
! Merge from&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| production&lt;br /&gt;
| &amp;lt;code&amp;gt;release/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;hotfix/&amp;lt;/code&amp;gt;&lt;br /&gt;
| Released code with a &amp;lt;code&amp;gt;git tag&amp;lt;/code&amp;gt; for each merge.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;release/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| Contains final releasing work like updating versioning and changelog. This is where we keep semver concerns in check if they were not taken care of already.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;hotfix/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only for critically urgent fixes. In most cases doing a release from &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt; is preferred.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only feature branches that are ready for production should get merged here. &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; gets merged here after each merge to it. Merging is done with pull requests and review.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/featurename&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| New features get implemented here until they are considered ready for production and merged to &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;support/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| LTS&lt;br /&gt;
| &lt;br /&gt;
| Marked experimental in most implementations and unused for now.&lt;br /&gt;
|}&lt;br /&gt;
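The table can be walked through with plain git; commits below are empty placeholders and the author identity is fake:

```shell
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.org
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.org
git init -q .
git commit -q --allow-empty -m "initial"
git checkout -qB master
# develop is fed by feature/* branches via reviewed merges
git checkout -qb develop
git checkout -qb feature/demo
git commit -q --allow-empty -m "add demo feature"
git checkout -q develop
git merge -q --no-ff -m "merge feature/demo" feature/demo
# a release branch collects final versioning work, then lands on master
git checkout -qb release/v0.1.0
git checkout -q master
git merge -q --no-ff -m "release v0.1.0" release/v0.1.0
git tag -a v0.1.0 -m "v0.1.0"
```

Each merge to master gets a tag; a hotfix/* branch would start from master instead of develop.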
&lt;br /&gt;
* Install gitlab on a vm and integrate external mirrors from github and ldap users from stoney-ldap.&lt;br /&gt;
** keep repo of public mirrors in hieradata so we can configure them from puppet.&lt;br /&gt;
** each organisation in stoney-ldap automatically gets a private project in gitlab.&lt;br /&gt;
* Configure web hook infrastructure and integrate with continuous integration system.&lt;br /&gt;
* Make continuous integration show feedback back in gitlab.&lt;br /&gt;
** check for &amp;lt;code&amp;gt;git annotate&amp;lt;/code&amp;gt; support or use img badges.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;On organization projects in gitlab&#039;&#039;&#039;&lt;br /&gt;
* Each project comes with default repos. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Repo&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
| Set up using a template, contains a Puppetfile and Puppetfile.lock and a hieradata directory.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global role module for reference.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global profile module for reference.&lt;br /&gt;
|}&lt;br /&gt;
* Everything in the latter two modules is configurable through hieradata in the first repo.&lt;br /&gt;
* The default setup automatically updates &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt; when they get new merges.&lt;br /&gt;
* A software agent (ci) regularly clones &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;, does a full build and pushes the results back to &amp;lt;code&amp;gt;feature/tinderbox&amp;lt;/code&amp;gt;&lt;br /&gt;
* This agent automatically creates pull requests if tinderbox builds did not fail.&lt;br /&gt;
* Org leaders may then merge these PRs and bake them into a local release.&lt;br /&gt;
* Some kind of UI helps them do this without much technical knowledge.&lt;br /&gt;
* More repos may be added by the customer.&lt;br /&gt;
* project organizations are private, per customer.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
* [https://github.com/sag47/gitlab-mirrors gitlab-mirrors] is a companion app to gitlab for adding readonly mirror repos to gitlab. We might consider hacking it to not use &amp;lt;code&amp;gt;git remote prune&amp;lt;/code&amp;gt;.&lt;br /&gt;
* [http://www.javacodegeeks.com/2014/01/git-flow-with-jenkins-and-gitlab.html git-flow with jenkins and gitlab]&lt;br /&gt;
* [https://wiki.jenkins-ci.org/display/JENKINS/Gitlab+Hook+Plugin gitlab hook for jenkins]&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=3014</id>
		<title>stoney orchestra: Roadmap</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=3014"/>
		<updated>2014-02-02T21:24:41Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== backlog ==&lt;br /&gt;
&lt;br /&gt;
=== unsorted ===&lt;br /&gt;
These might be rather epic and need breaking down until they are sensible user-stories.&lt;br /&gt;
&lt;br /&gt;
* investigate, document and implement hooks and git-scripts needed for continuous-(integration|development|.*)&lt;br /&gt;
* evaluate and decide between puppet-librarian, r10k or byoc solution for applying Puppetfile/Modulefile&lt;br /&gt;
* develop and establish &amp;quot;frontend&amp;quot; tooling like git-* scripts or puppet-rake tasks for devs and admins&lt;br /&gt;
* use mcollective for orchestration when provisioning new services thru puppet&lt;br /&gt;
* early bootstrap of puppet master during stoney cloud install&lt;br /&gt;
&lt;br /&gt;
== links ==&lt;br /&gt;
&lt;br /&gt;
* Architecture&lt;br /&gt;
** [[Gentoo_Infrastructure#Puppet_proposal]]&lt;br /&gt;
** [http://www.craigdunn.org/2012/05/239/ Craig Dunn&#039;s Blog: Designing Puppet – Roles and Profiles]&lt;br /&gt;
* Puppet Modules&lt;br /&gt;
** [http://forge.puppetlabs.com/gentoo/portage gentoo/portage puppet module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/syslogng syslogng module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/ccache ccache module] ;)&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]][[Category:Roadmap]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Main_Page&amp;diff=3013</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Main_Page&amp;diff=3013"/>
		<updated>2014-02-02T21:18:56Z</updated>

		<summary type="html">&lt;p&gt;Lucas: make the infrastructure part be more marketing like and contain links to moar&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;float:left; width:67%&amp;quot;&amp;gt;&lt;br /&gt;
The stoney cloud wiki acts as a collection of all aspects of a high availability cloud infrastructure. It is roughly divided into the following sections:&lt;br /&gt;
* &#039;&#039;&#039;Infrastructure&#039;&#039;&#039;: The basis of the whole ecosystem. &lt;br /&gt;
* &#039;&#039;&#039;stoney cloud&#039;&#039;&#039;: The actual cloud with a simple multi-tenant web based interface.&lt;br /&gt;
* &#039;&#039;&#039;Self-Service Modules&#039;&#039;&#039;: Extension modules to expand the functionality of the stoney cloud.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:33%&amp;quot;&amp;gt;&lt;br /&gt;
== Quick Links ==&lt;br /&gt;
* [http://packages.stoney-cloud.org/stoney-cloud/pre-releases/1.2/iso/ stoney cloud: Download]&lt;br /&gt;
* [[stoney cloud: Demo-System Installation]]&lt;br /&gt;
* [[stoney cloud: Single-Node Installation]]&lt;br /&gt;
* [[stoney cloud: Multi-Node Installation]]&lt;br /&gt;
* [[stoney cloud: Upgrade]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both; height:0px; line-height:0px;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:left; width:33%&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
The stoney cloud builds upon the [http://www.gentoo.org/ Gentoo] Linux Distribution and is dependent on infrastructure projects like:&lt;br /&gt;
* Build Server&lt;br /&gt;
* Binary Package Server&lt;br /&gt;
* Mirror Server&lt;br /&gt;
* Puppet Server&lt;br /&gt;
While this infrastructure will serve stoney cloud in many ways, it may also be used to build cloud independent systems. We want you to be able to deploy desktops or more specialized gear using the same stack the infrastructure projects use to build services and manage machines for the cloud.&lt;br /&gt;
&lt;br /&gt;
Currently some [[Gentoo Infrastructure|requirements and proposals]] have been imported from the RaBe Wiki and are being fleshed out and merged into this wiki proper.&lt;br /&gt;
== stoney cloud Wiki Write Access ==&lt;br /&gt;
If you would like write access to the stoney cloud Wiki, please send us a [mailto:support@stoney-cloud.org?Subject=stoney%20cloud%20Wiki%3A%20Request%20Account&amp;amp;body=Dear%20stoney%20cloud%20Support%20Team%0A%0AI%20hereby%20request%20a%20write%20access%20account%20to%20the%20stoney%20cloud%20Wiki.%0A%0AName%3A%20%0ASurname%3A%20%0AE-Mail%3A%0A%0AMany%20thanks%20in%20advance! request].&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:left; width:34%;&amp;quot;&amp;gt;&lt;br /&gt;
== stoney cloud ==&lt;br /&gt;
The stoney cloud is an expandable multi-tenant web based Open Source Cloud management solution with service providers as its target audience.&lt;br /&gt;
&lt;br /&gt;
*[[:Category:stoney core|stoney core]]: Main framework responsible for shared functionality:&lt;br /&gt;
** [[:Category:REST_API|REST API]], which serves as a data and business logic abstraction layer and uses JSON as the primary data interchange format.&lt;br /&gt;
** User management, rights and roles.&lt;br /&gt;
** A consistent look and feel between modules.&lt;br /&gt;
** Internationalization.&lt;br /&gt;
&lt;br /&gt;
*[[:Category:stoney conductor|stoney conductor]]: Responsible for:&lt;br /&gt;
** Storage allocation&lt;br /&gt;
** Network configuration&lt;br /&gt;
** Virtual machine profiles&lt;br /&gt;
** Virtual machine templates&lt;br /&gt;
** Virtual machines&lt;br /&gt;
** Virtual machine snapshots&lt;br /&gt;
** Virtual machine full backups&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:33%&amp;quot;&amp;gt;&lt;br /&gt;
== Self-Service Modules ==&lt;br /&gt;
=== Existing ===&lt;br /&gt;
* [[:Category:stoney backup|stoney backup]]: On-line backup service for desktops, servers and virtual machines.&lt;br /&gt;
&lt;br /&gt;
=== Work in progress ===&lt;br /&gt;
* [[:Category:stoney vm|stoney vm]]: Simplified subset of the [[:Category:stoney conductor|stoney conductor]] functionality.&lt;br /&gt;
* [[:Category:stoney mail|stoney mail]]: Mail service with optional group-ware (based on Open-Xchange).&lt;br /&gt;
&lt;br /&gt;
=== Planned ===&lt;br /&gt;
* [[:Category:stoney monitor|stoney monitor]]: Monitoring (with Zabbix).&lt;br /&gt;
* [[:Category:stoney orchestra|stoney orchestra]]: Configuration Management (with Puppet).&lt;br /&gt;
* [[:Category:stoney box|stoney box]]: An on-line storage service (will support WebDAV via HTTPS and SFTP, later CIFS as well).&lt;br /&gt;
* [[:Category:stoney web|stoney web]]: Web &amp;amp; Database hosting service (based on Apache and MariaDB).&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear:both; height:0px; line-height:0px;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
__NOEDITSECTION__&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Hiera_Example&amp;diff=3009</id>
		<title>Hiera Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Hiera_Example&amp;diff=3009"/>
		<updated>2014-02-01T15:24:02Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---&lt;br /&gt;
:backends:&lt;br /&gt;
  - ldap&lt;br /&gt;
  - yaml&lt;br /&gt;
  - json&lt;br /&gt;
:ldap:&lt;br /&gt;
  :url: ldaps://ldap.stoney-cloud.org:636/&lt;br /&gt;
  :binddn: cn=Manager,dc=stoney-cloud,dc=org&lt;br /&gt;
  :bindpw: secret&lt;br /&gt;
  :basedn: dc=stoney-cloud,dc=org&lt;br /&gt;
:yaml:&lt;br /&gt;
  :datadir: /etc/puppet/hieradata&lt;br /&gt;
:json:&lt;br /&gt;
  :datadir: /etc/puppet/hieradata&lt;br /&gt;
:hierarchy:&lt;br /&gt;
  - &amp;quot;ou=virtual machines,ou=services?sub?(&amp;amp;(sstNetworkHostName=%{::hostname})(sstNetworkDomainName=%{::domainname}))&amp;quot;&lt;br /&gt;
  - &amp;quot;ou=software stack,ou=configuration?sub?(uid=%{::rzUid})&amp;quot;&lt;br /&gt;
  - &amp;quot;%{::clientcert}&amp;quot;&lt;br /&gt;
  - &amp;quot;%{::custom_location}&amp;quot;&lt;br /&gt;
  - common&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* This is an example of how a hiera config file might look with a mock ldap backend. The backend in question still needs to be found or written.&lt;br /&gt;
* mapping from a DN to a directory structure would be nice, so we would write &#039;&#039;ou=virtual machines/ou=services&#039;&#039; instead, to stay compatible with the already existing yaml/json backends, or something different entirely&lt;br /&gt;
* this needs to take into consideration that puppet expects keys to be defined in a way to enable implicit parameter injection in parameterized classes&lt;br /&gt;
* existing ldap backends for hiera: [https://github.com/hunner/hiera-ldap], [http://forge.ircam.fr/p/hiera-ldap-backend/]&lt;br /&gt;
* we should probably aim at integrating this in hiera-2 with regards to ARM-8 and the already implemented ARM-9&lt;br /&gt;
* I&#039;m not convinced that we should not just grab all this data from the [[stoney_core:_REST_API]] and use that as an integration point for puppet.&lt;br /&gt;
&lt;br /&gt;
[[Category: Documentation]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Hiera_Example&amp;diff=3008</id>
		<title>Hiera Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Hiera_Example&amp;diff=3008"/>
		<updated>2014-02-01T15:14:50Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---&lt;br /&gt;
:backends:&lt;br /&gt;
  - ldap&lt;br /&gt;
  - yaml&lt;br /&gt;
  - json&lt;br /&gt;
:ldap:&lt;br /&gt;
  :url: ldaps://ldap.stoney-cloud.org:636/&lt;br /&gt;
  :binddn: cn=Manager,dc=stoney-cloud,dc=org&lt;br /&gt;
  :bindpw: secret&lt;br /&gt;
  :basedn: dc=stoney-cloud,dc=org&lt;br /&gt;
:yaml:&lt;br /&gt;
  :datadir: /etc/puppet/hieradata&lt;br /&gt;
:json:&lt;br /&gt;
  :datadir: /etc/puppet/hieradata&lt;br /&gt;
:hierarchy:&lt;br /&gt;
  - &amp;quot;ou=virtual machines,ou=services?sub?(&amp;amp;(sstNetworkHostName=%{::hostname})(sstNetworkDomainName=%{::domainname}))&amp;quot;&lt;br /&gt;
  - &amp;quot;ou=software stack,ou=configuration?sub?(uid=%{::rzUid})&amp;quot;&lt;br /&gt;
  - &amp;quot;%{::clientcert}&amp;quot;&lt;br /&gt;
  - &amp;quot;%{::custom_location}&amp;quot;&lt;br /&gt;
  - common&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* This is an example of how a hiera config file might look with a mock ldap backend. The backend in question still needs to be found or written.&lt;br /&gt;
* mapping from a DN to a directory structure would be nice, so we would write &#039;&#039;ou=virtual machines/ou=services&#039;&#039; instead, to stay compatible with the already existing yaml/json backends, or something different entirely&lt;br /&gt;
* this needs to take into consideration that puppet expects keys to be defined in a way to enable implicit parameter injection in parameterized classes&lt;br /&gt;
* existing ldap backends for hiera: [https://github.com/hunner/hiera-ldap], [http://forge.ircam.fr/p/hiera-ldap-backend/]&lt;br /&gt;
* we should probably aim at integrating this in hiera-2 with regards to ARM-8 and the already implemented ARM-9&lt;br /&gt;
* I&#039;m not convinced that we should not just grab all this data from the [[stoney_core:_REST_API]] and use that as an integration point for puppet.&lt;br /&gt;
&lt;br /&gt;
[[Category: Documentation]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Requirements&amp;diff=3007</id>
		<title>stoney orchestra: Requirements</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Requirements&amp;diff=3007"/>
		<updated>2014-02-01T11:54:34Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;&lt;br /&gt;
== Overview ==&lt;br /&gt;
== Requirements ==&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Version controlled via Git&lt;br /&gt;
* ENC and hiera support with data from ldap&lt;br /&gt;
* Puppet recipes for &lt;br /&gt;
** installing, updating, removing and (re-)configuring specific software belonging to an application stack (see [[#Build_host_requirements|build host]]).&lt;br /&gt;
** (re-)configuring software belonging to a system stack&lt;br /&gt;
** Updating the system stack (&amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;) aka system update.&lt;br /&gt;
** installing, updating and removing of kernel packages (including the handling of the ensuing reboot)&lt;br /&gt;
* use best-of-breed tools like hiera and augeas (this might mean targeting 3.3.x due to module data support in [https://github.com/puppetlabs/armatures/blob/master/arm-9.data_in_modules/index.md ARM-9])&lt;br /&gt;
* Use a sane pre-existing puppet architecture concept&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
[[Category:stoney orchestra]]&lt;br /&gt;
[[Category:Requirements]]&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3003</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3003"/>
		<updated>2014-01-31T20:09:17Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Build farm proposal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using Gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile, but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a Ruby stack, which in turn is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-updates and security updates (which must be provided nonetheless if the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via an HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
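&lt;br /&gt;
Putting the binhost and profile requirements together, a client&#039;s &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt; might end up looking roughly like this (hostnames, arch and profile names are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pull prebuilt packages from the build host (sketch)&lt;br /&gt;
FEATURES=&amp;quot;getbinpkg&amp;quot;&lt;br /&gt;
PORTAGE_BINHOST=&amp;quot;https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app&amp;quot;&lt;br /&gt;
GENTOO_MIRRORS=&amp;quot;https://mirror.example.com/public/gentoo/distfiles&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;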
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
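&lt;br /&gt;
The GLSA check could be prototyped with &amp;lt;code&amp;gt;glsa-check&amp;lt;/code&amp;gt; from &amp;lt;code&amp;gt;app-portage/gentoolkit&amp;lt;/code&amp;gt;, run periodically against the cloned tree (the GLSA id below is illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# list all GLSAs affecting the installed system (sketch)&lt;br /&gt;
glsa-check --test all&lt;br /&gt;
# show details for one advisory, then apply the fix&lt;br /&gt;
glsa-check --dump 201401-01&lt;br /&gt;
glsa-check --fix 201401-01&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;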
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
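&lt;br /&gt;
A minimal overlay profile tree following the advice above might look like this (all paths and values are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
profiles/base/parent         # points at an official Gentoo profile&lt;br /&gt;
profiles/base/make.defaults  # USE=&amp;quot;...&amp;quot;, CFLAGS=&amp;quot;...&amp;quot;, GENTOO_MIRRORS=&amp;quot;...&amp;quot;&lt;br /&gt;
profiles/base/packages       # *category/pkg entries added to @system&lt;br /&gt;
profiles/server/parent       # contains: ../base&lt;br /&gt;
profiles/desktop/parent      # contains: ../base&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;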
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
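&lt;br /&gt;
If we don&#039;t end up using razor, the MAC-based PXE part could be covered by a small dnsmasq config (MAC address and tag name are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# serve PXE boot files to known machines only (sketch)&lt;br /&gt;
enable-tftp&lt;br /&gt;
tftp-root=/var/lib/tftpboot&lt;br /&gt;
dhcp-host=52:54:00:12:34:56,set:gentoo-server&lt;br /&gt;
dhcp-boot=tag:gentoo-server,pxelinux.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;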
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since the standard does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA], last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
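&lt;br /&gt;
Bootstrapping the two-tier CA could be done with plain &amp;lt;code&amp;gt;openssl&amp;lt;/code&amp;gt; on the offline root machine (file names, config sections and the DN are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# create a key and CSR for the puppet sub-CA (sketch)&lt;br /&gt;
openssl req -new -newkey rsa:4096 -keyout puppet-ca.key \&lt;br /&gt;
    -out puppet-ca.csr -subj &amp;quot;/DC=ch/DC=rabe/CN=Puppet Sub-CA&amp;quot;&lt;br /&gt;
# sign it with the offline root CA&lt;br /&gt;
openssl ca -config root-ca.cnf -extensions v3_ca \&lt;br /&gt;
    -in puppet-ca.csr -out puppet-ca.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;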
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
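&lt;br /&gt;
The release tagging described above boils down to (version number and message are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# cut a semver release and publish it (sketch)&lt;br /&gt;
git tag -a v1.0.0 -m &amp;quot;first production release&amp;quot;&lt;br /&gt;
git push origin v1.0.0&lt;br /&gt;
# GitHub then offers the tag as a tarball for the ebuild&#039;s SRC_URI&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;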
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either beforehand or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build farm proposal ==&lt;br /&gt;
The build farm consists of multiple VMs which build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
* Git webhook on internal gitlab install pushes changes to jenkins master.&lt;br /&gt;
* Jenkins master dishes out jobs to jenkins slave machines for needed architecture and build profile.&lt;br /&gt;
* Jenkins slaves only get used once and wipe/reprovision themselves after master has stored build artefacts.&lt;br /&gt;
* We have build-slave templates available for each architecture/build profile combo.&lt;br /&gt;
* Upon use those get provisioned to the needed environment using puppet.&lt;br /&gt;
* All of this is set up using puppet and fully automated, even building of new build-slave templates and the whole releng on those.&lt;br /&gt;
* The build farm also keeps old templates and stable boxes on hold so it can use them to build differentials.&lt;br /&gt;
* Artefacts the slaves will be producing:&lt;br /&gt;
** &amp;quot;vagrant&amp;quot;-style boot boxes&lt;br /&gt;
** full binpkg repos for a given env/arch/build profile combo&lt;br /&gt;
** stage3 balls for each arch/build profile&lt;br /&gt;
** stage4 balls for each environment&lt;br /&gt;
** build logs&lt;br /&gt;
** &amp;lt;code&amp;gt;/var/db/pkg&amp;lt;/code&amp;gt;&lt;br /&gt;
** puppet report data&lt;br /&gt;
** test results and code analysis results&lt;br /&gt;
* When we get to continuous deployment the jenkins master will also be able to trigger puppet when merges to master happen.&lt;br /&gt;
* This rolls out releases to the sub-system that was signed off by a merge to a master branch (see branching strategy in git proposal).&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest; among other things it shows that Google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is Sabayon&#039;s portage replacement; it focuses on binaries since Sabayon is a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box, I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRY ([https://github.com/jedi4ever/veewee/pull/690]) to make it worth forking&lt;br /&gt;
* [http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4 mkstage4]&lt;br /&gt;
** aimed at creating backup stage4 tarballs of gentoo systems&lt;br /&gt;
** written in bash&lt;br /&gt;
** pretty simple, might come in handy as automation tool&lt;br /&gt;
&lt;br /&gt;
==== kernel ====&lt;br /&gt;
&lt;br /&gt;
* at the moment we build tarballs for the kernel+initramfs and the modules using &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; and have a separate ebuild which installs them&lt;br /&gt;
* ideally we would like to have an ebuild which takes the kernel sources (like the ebuild for &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt; does), builds them according to some default configuration, or a user configuration if available (&amp;lt;code&amp;gt;savedconfig.eclass&amp;lt;/code&amp;gt;), and then installs the kernel and the modules as well as some minimal headers and configuration to build other packages requiring the sources to be present&lt;br /&gt;
* TODO: check whether dracut has some advantages regarding module loading over genkernel-generated initramfs&lt;br /&gt;
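&lt;br /&gt;
The current tarball approach roughly corresponds to the following (paths and flags are illustrative):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# build kernel, initramfs and modules non-interactively (sketch)&lt;br /&gt;
genkernel --install --lvm --kernel-config=/root/kernel.config all&lt;br /&gt;
# pack the results for a simple binary-install ebuild&lt;br /&gt;
tar czf kernel-modules.tar.gz /lib/modules&lt;br /&gt;
tar czf kernel-image.tar.gz /boot&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;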
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (i.e. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (i.e. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (i.e. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (i.e. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (i.e. profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, i.e. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub may use hooks to push changes to our internal git as they happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but supports multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
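A minimal sketch of what the &amp;lt;code&amp;gt;Puppetfile&amp;lt;/code&amp;gt; driving librarian-puppet/r10k could look like (module names, versions and the &amp;lt;code&amp;gt;:path&amp;lt;/code&amp;gt; layout are placeholders, not our final module list):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# pinned modules from the puppet forge&lt;br /&gt;
mod &#039;puppetlabs/stdlib&#039;, &#039;4.1.0&#039;&lt;br /&gt;
# role and profile modules straight from our git repo&lt;br /&gt;
mod &#039;role&#039;, :git =&amp;gt; &#039;https://github.com/radiorabe/puppet.git&#039;, :path =&amp;gt; &#039;role&#039;&lt;br /&gt;
mod &#039;profile&#039;, :git =&amp;gt; &#039;https://github.com/radiorabe/puppet.git&#039;, :path =&amp;gt; &#039;profile&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;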
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] provisioning tool aimed at developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen it used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key safely on 2 sdcards and as 1 printout&lt;br /&gt;
** store the level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 cas as needed&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement will be adopted as soon as OpenSSL catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
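A hedged sketch of signing a new level 2 CA with the level 1 intermediate using plain OpenSSL (file names and the &amp;lt;code&amp;gt;ca.cnf&amp;lt;/code&amp;gt; config are placeholders; the real invocation happens during a ceremony with the key read from sdcard):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# generate a key and CSR for the new level 2 CA&lt;br /&gt;
openssl req -new -newkey rsa:4096 -keyout level2-puppet.key -out level2-puppet.csr&lt;br /&gt;
# sign the CSR with the level 1 intermediate key&lt;br /&gt;
openssl ca -config ca.cnf -extensions v3_ca -in level2-puppet.csr -out level2-puppet.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;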
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* adhere to git-flow for all the things. Automate said usage as far as possible.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=4 | git-flow branching &lt;br /&gt;
|-&lt;br /&gt;
! Branch&lt;br /&gt;
! Environment&lt;br /&gt;
! Merge from&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| production&lt;br /&gt;
| &amp;lt;code&amp;gt;release/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;hotfix/&amp;lt;/code&amp;gt;&lt;br /&gt;
| Released code with a &amp;lt;code&amp;gt;git tag&amp;lt;/code&amp;gt; for each merge.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;release/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| Contains final releasing work like updating versioning and changelog. This is where we keep semver concerns in check if they were not already taken care of.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;hotfix/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only for critically urgent fixes. In most cases doing a release from &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt; is preferred.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only feature branches that are ready for production should get merged here. &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; gets merged here after each merge to it. Merging is done with pull requests and review.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/featurename&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| New features get implemented here until they are considered ready for production and merged to &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;support/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| LTS&lt;br /&gt;
| &lt;br /&gt;
| Marked experimental in most implementations and unused for now.&lt;br /&gt;
|}&lt;br /&gt;
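With the git-flow tooling installed, the branching model above roughly maps to commands like these (the version number is an example):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git flow feature start featurename   # branches feature/featurename off develop&lt;br /&gt;
git flow feature finish featurename  # merges it back into develop&lt;br /&gt;
git flow release start v0.1.0        # branches release/v0.1.0 off develop&lt;br /&gt;
git flow release finish v0.1.0       # merges into master and develop, tags the release&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;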
&lt;br /&gt;
* Install gitlab on a vm and integrate external mirrors from github and ldap users from stoney-ldap.&lt;br /&gt;
** keep repo of public mirrors in hieradata so we can configure them from puppet.&lt;br /&gt;
** each organisation in stoney-ldap automatically gets a private project in gitlab.&lt;br /&gt;
* Configure webhook infrastructure and integrate it with the continuous integration system.&lt;br /&gt;
* Make continuous integration report feedback back in gitlab.&lt;br /&gt;
** check for &amp;lt;code&amp;gt;git annotate&amp;lt;/code&amp;gt; support or use img badges.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;On organization projects in gitlab&#039;&#039;&#039;&lt;br /&gt;
* Each project comes with default repos. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Repo&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
| Set up using a template, contains a Puppetfile and Puppetfile.lock and a hieradata directory.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global role module for reference.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global profile module for reference.&lt;br /&gt;
|}&lt;br /&gt;
* Everything in the latter two modules is configurable through hieradata in the first repo.&lt;br /&gt;
* The default setup automatically updates &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt; when they get new merges.&lt;br /&gt;
* A software agent (ci) regularly clones &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;, does a full build and pushes the results back to &amp;lt;code&amp;gt;feature/tinderbox&amp;lt;/code&amp;gt;&lt;br /&gt;
* This agent automatically creates pull requests if tinderbox builds did not fail.&lt;br /&gt;
* Org leaders may then merge these PRs and bake them into a local release.&lt;br /&gt;
* Some kind of UI helps them do this without much technical knowledge.&lt;br /&gt;
* More repos may be added by the customer.&lt;br /&gt;
* project organizations are private, per customer.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
* [https://github.com/sag47/gitlab-mirrors gitlab-mirrors] is a companion app to gitlab for adding readonly mirror repos to gitlab. We might consider hacking it to not use &amp;lt;code&amp;gt;git remote prune&amp;lt;/code&amp;gt;.&lt;br /&gt;
* [http://www.javacodegeeks.com/2014/01/git-flow-with-jenkins-and-gitlab.html git-flow with jenkins and gitlab]&lt;br /&gt;
* [https://wiki.jenkins-ci.org/display/JENKINS/Gitlab+Hook+Plugin gitlab hook for jenkins]&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3002</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3002"/>
		<updated>2014-01-31T20:07:04Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Build farm proposal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We already need to clean up some terms (for instance the portage vs. puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more precisely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off of a ruby stack, which is in turn based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on, say, a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will need this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example: I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-dep rebuilds and security updates (which must be provided whether the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles (such as php5-app, django-app etc.)&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via an HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
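On a client, the binhost layout described above could translate to a &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt; fragment like this (host name and profile names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# system profile first, followed by the application stack profiles&lt;br /&gt;
PORTAGE_BINHOST=&amp;quot;https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app&amp;quot;&lt;br /&gt;
# prefer binary packages, fall back to building from source&lt;br /&gt;
EMERGE_DEFAULT_OPTS=&amp;quot;--usepkg&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;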
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
*** This should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
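A sketch of the clone-and-promote workflow implied by these requirements (the mirror URL and remote names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# one-time setup: clone a git mirror of the official tree&lt;br /&gt;
git clone https://git.example.com/portage.git &amp;amp;&amp;amp; cd portage&lt;br /&gt;
git branch development &amp;amp;&amp;amp; git branch staging &amp;amp;&amp;amp; git branch production&lt;br /&gt;
# periodic sync: pull upstream into development, then promote upwards&lt;br /&gt;
git checkout development &amp;amp;&amp;amp; git pull upstream master&lt;br /&gt;
git checkout staging &amp;amp;&amp;amp; git merge development&lt;br /&gt;
git checkout production &amp;amp;&amp;amp; git merge staging&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;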
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Portage_overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
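As an illustration, a &amp;lt;code&amp;gt;server&amp;lt;/code&amp;gt; profile in the overlay could consist of little more than its &amp;lt;code&amp;gt;parent&amp;lt;/code&amp;gt; files (profile names are placeholders; the &amp;lt;code&amp;gt;repo:path&amp;lt;/code&amp;gt; parent syntax assumes a portage version that supports it, see portage(5)):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# profiles/server/parent -- inherit our own base profile&lt;br /&gt;
../base&lt;br /&gt;
&lt;br /&gt;
# profiles/base/parent -- an official Gentoo profile as master&lt;br /&gt;
gentoo:default/linux/amd64/13.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;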
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since it does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand, puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; those will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
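The semver tagging above boils down to an annotated tag per release; a minimal sketch run in a throwaway repo (all names are illustrative):&lt;br /&gt;

```shell
# demo in a throwaway repo; the tag name follows the vX.Y.Z scheme above
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
git commit -q --allow-empty -m "initial import"
git tag -a v1.0.0 -m "first production release"
git tag -l    # GitHub turns such tags into downloadable tarballs
```

Annotated tags (as opposed to lightweight ones) carry the tagger and message, which is what we want for release bookkeeping.&lt;br /&gt;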
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either ahead of time or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build farm proposal ==&lt;br /&gt;
The build farm is a system of multiple VMs that builds binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
* Git webhook on internal gitlab install pushes changes to jenkins master.&lt;br /&gt;
* Jenkins master dishes out jobs to jenkins slave machines for needed architecture and build profile.&lt;br /&gt;
* Jenkins slaves only get used once and wipe/reprovision themselves after master has stored build artefacts.&lt;br /&gt;
* We have build-slave templates available for each architecture/build profile combo.&lt;br /&gt;
* Upon use those get provisioned to the needed environment using puppet.&lt;br /&gt;
* All of this is set up using puppet and fully automated, even building of new build-slave templates and the whole releng on those.&lt;br /&gt;
* The build farm also keeps old templates and stable boxes on hold so it can use them to build differentials.&lt;br /&gt;
* Artefacts the slaves will produce:&lt;br /&gt;
** &amp;quot;vagrant&amp;quot;-style boot boxes&lt;br /&gt;
** full binpkg repos for a given env/arch/build profile combo&lt;br /&gt;
** stage3 balls for each arch/build profile&lt;br /&gt;
** stage4 balls for each environment&lt;br /&gt;
** build logs&lt;br /&gt;
** &amp;lt;code&amp;gt;/var/db/pkg&amp;lt;/code&amp;gt;&lt;br /&gt;
** puppet report data&lt;br /&gt;
** test results and code analysis results&lt;br /&gt;
* When we get to continuous deployment the jenkins master will also be able to trigger puppet when merges to master happen, thus rolling out releases to the sub-system that was signed off by a merge to a master branch.&lt;br /&gt;
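The master-to-slave handoff described above would likely end up as a parameterized remote trigger; a hedged sketch (host, job name, parameter and credentials are all made up):&lt;br /&gt;

```shell
# hypothetical jenkins job per arch/build-profile combo; API token auth assumed
curl -fsS -X POST --user deploy:API_TOKEN \
  "https://jenkins.example.com/job/binpkg-amd64-server/buildWithParameters?ENV=staging"
```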
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (i.e. a build might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest; among other things it shows that Google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is Sabayon&#039;s portage replacement; it focuses on binaries due to sabayon being a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box, I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRYing up ([https://github.com/jedi4ever/veewee/pull/690]) to make it worth forking&lt;br /&gt;
* [http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4 mkstage4]&lt;br /&gt;
** aimed at creating backup stage4 tarballs of gentoo systems&lt;br /&gt;
** written in bash&lt;br /&gt;
** pretty simple, might come in handy as automation tool&lt;br /&gt;
&lt;br /&gt;
==== kernel ====&lt;br /&gt;
&lt;br /&gt;
* at the moment we build tarballs for the kernel+initramfs and the modules using &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; and have a separate ebuild which installs them&lt;br /&gt;
* ideally we would like an ebuild which takes the kernel sources (like the ebuild for &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt; does), builds them according to some default configuration or, if available, a user configuration (&amp;lt;code&amp;gt;savedconfig.eclass&amp;lt;/code&amp;gt;), and then installs the kernel and modules as well as some minimal headers+configuration needed to build other packages which require the sources to be present&lt;br /&gt;
* TODO: check whether dracut has some advantages regarding module loading over genkernel-generated initramfs&lt;br /&gt;
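The current tarball flow (first bullet above) can be sketched roughly like this; config path, output paths and tarball names are made up, and it only runs on a box with &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; installed:&lt;br /&gt;

```shell
# build kernel + initramfs (and modules) from a saved config
genkernel --install --lvm --kernel-config=/etc/kernels/server.config all
# pack the artefacts for the separate installer ebuild to fetch
ver=$(readlink /usr/src/linux | sed 's/^linux-//')   # e.g. 3.12.x-gentoo
tar -C /boot -czf "kernel-${ver}.tar.gz" .
tar -C /lib/modules -czf "modules-${ver}.tar.gz" "${ver}"
```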
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (i.e. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contains the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contains the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all of these + production (what exactly is in production, i.e. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub webhooks may push changes to our internal git as they happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
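The r10k option above collapses the branch-per-environment idea into one config file plus one command; a sketch, where the repo URL and paths are assumptions:&lt;br /&gt;

```shell
# map every branch of the private puppet repo to a puppet environment
cat > /etc/r10k.yaml <<'EOF'
:cachedir: /var/cache/r10k
:sources:
  :rabe:
    remote: git@git.internal.example.com:rabe/puppet.git
    basedir: /etc/puppet/environments
EOF
# deploy all environments and resolve each branch's Puppetfile
r10k deploy environment --puppetfile
```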
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
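An iPXE entry point for the install host could be as small as the following script (all URLs and kernel arguments are placeholders):&lt;br /&gt;

```text
#!ipxe
dhcp
kernel http://install.example.com/gentoo/kernel root=/dev/ram0 init=/linuxrc
initrd http://install.example.com/gentoo/initramfs
boot
```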
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout, somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not implement yet get adopted as soon as OpenSSL catches up (i.e. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
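The two-level CA setup above can be sketched with plain &amp;lt;code&amp;gt;openssl&amp;lt;/code&amp;gt;; subject names, key sizes and lifetimes are placeholders, and a real ceremony would use a proper CA config with the root kept offline:&lt;br /&gt;

```shell
tmp=$(mktemp -d) && cd "$tmp"
# self-signed root (kept offline in the real setup)
openssl req -x509 -newkey rsa:4096 -nodes -subj "/O=RaBe/CN=RaBe Root CA" \
  -days 3650 -keyout root.key -out root.crt
# level 1 intermediate: CSR signed by the root, marked as a CA
openssl req -newkey rsa:4096 -nodes -subj "/O=RaBe/CN=RaBe Level 1 CA" \
  -keyout l1.key -out l1.csr
openssl x509 -req -in l1.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 1825 -extfile <(printf 'basicConstraints=critical,CA:TRUE\n') -out l1.crt
openssl verify -CAfile root.crt l1.crt
```

The level 2 CAs would then be signed with &amp;lt;code&amp;gt;l1.key&amp;lt;/code&amp;gt; in the same fashion.&lt;br /&gt;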
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* adhere to git-flow for all the things. Automate said usage as far as possible.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=4 | git-flow branching &lt;br /&gt;
|-&lt;br /&gt;
! Branch&lt;br /&gt;
! Environment&lt;br /&gt;
! Merge from&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| production&lt;br /&gt;
| &amp;lt;code&amp;gt;release/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;hotfix/&amp;lt;/code&amp;gt;&lt;br /&gt;
| Released code with a &amp;lt;code&amp;gt;git tag&amp;lt;/code&amp;gt; for each merge.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;release/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| Contains final releasing work like updating versioning and changelog. This is where we keep semver concerns in check if they were not taken care of already.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;hotfix/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only for critically urgent fixes. In most cases doing a release from &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt; is preferred.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only feature branches that are ready for production should get merged here. &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; gets merged here after each merge to it. Merging is done with pull requests and review.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/featurename&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| New features get implemented here until they are considered ready for production and merged to &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;support/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| LTS&lt;br /&gt;
| &lt;br /&gt;
| Marked experimental in most implementations and unused for now.&lt;br /&gt;
|}&lt;br /&gt;
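The table above maps onto a plain git command sequence; a minimal sketch in a throwaway repo (the feature name and version are made up):&lt;br /&gt;

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
git commit -q --allow-empty -m "initial" && git branch -M master
git checkout -q -b develop master
git checkout -q -b feature/demo develop               # feature work happens here
git commit -q --allow-empty -m "implement demo"
git checkout -q develop && git merge -q --no-ff -m "merge feature/demo" feature/demo
git checkout -q -b release/v0.1.0 develop             # staging: version/changelog bumps
git checkout -q master && git merge -q --no-ff -m "release v0.1.0" release/v0.1.0
git tag -a v0.1.0 -m "v0.1.0"                         # one tag per merge to master
git checkout -q develop && git merge -q master        # master flows back into develop
```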
&lt;br /&gt;
* Install gitlab on a vm and integrate external mirrors from github and ldap users from stoney-ldap.&lt;br /&gt;
** keep the list of public mirror repos in hieradata so we can configure them from puppet.&lt;br /&gt;
** each organisation in stoney-ldap automatically gets a private project in gitlab.&lt;br /&gt;
* Configure web hook infrastructure and integrate with the continuous integration system.&lt;br /&gt;
* Make continuous integration show feedback back in gitlab.&lt;br /&gt;
** check for &amp;lt;code&amp;gt;git annotate&amp;lt;/code&amp;gt; support or use img badges.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;On organization projects in gitlab&#039;&#039;&#039;&lt;br /&gt;
* Each project comes with default repos. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Repo&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
| Set up using a template, contains a Puppetfile and Puppetfile.lock and a hieradata directory.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read-only copy of the global role module for reference.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read-only copy of the global profile module for reference.&lt;br /&gt;
|}&lt;br /&gt;
* Everything in the latter two modules is configurable through hieradata in the first repo.&lt;br /&gt;
* The default setup automatically updates &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt; when they get new merges.&lt;br /&gt;
* A software agent (ci) regularly clones &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;, does a full build and pushes the results back to &amp;lt;code&amp;gt;feature/tinderbox&amp;lt;/code&amp;gt;&lt;br /&gt;
* This agent automatically creates pull requests if tinderbox builds did not fail.&lt;br /&gt;
* Org leaders may then merge these PRs and bake them into a local release.&lt;br /&gt;
* Some kind of UI helps them do this without much technical knowledge.&lt;br /&gt;
* More repos may be added by the customer.&lt;br /&gt;
* project organizations are private, per customer.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
* [https://github.com/sag47/gitlab-mirrors gitlab-mirrors] is a companion app to gitlab for adding readonly mirror repos to gitlab. We might consider hacking it to not use &amp;lt;code&amp;gt;git remote prune&amp;lt;/code&amp;gt;.&lt;br /&gt;
* [http://www.javacodegeeks.com/2014/01/git-flow-with-jenkins-and-gitlab.html git-flow with jenkins and gitlab]&lt;br /&gt;
* [https://wiki.jenkins-ci.org/display/JENKINS/Gitlab+Hook+Plugin gitlab hook for jenkins]&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3001</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3001"/>
		<updated>2014-01-31T19:52:17Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Build host proposal */ -&amp;gt; build farm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance a Ruby on Rails stack will be based off a ruby stack, which is based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package built with &amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt; and a MySQL client &amp;amp; libs package built with &amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this; we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on, say, a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example: I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-dependency updates and security updates (which must be provided whether the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
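Per-USE-flag binaries of the same version (first bullet above) boil down to one build pass per build profile, each with its own &amp;lt;code&amp;gt;PKGDIR&amp;lt;/code&amp;gt;; a hedged sketch where the paths are made up and a real setup would use separate chroots/build roots rather than bare &amp;lt;code&amp;gt;USE=&amp;lt;/code&amp;gt; overrides:&lt;br /&gt;

```shell
# client/libs-only flavour for a "minimal" build profile
USE="minimal" PKGDIR=/var/packages/amd64/minimal \
  emerge -q --buildpkgonly dev-db/mysql
# full server flavour for the default server profile
USE="-minimal" PKGDIR=/var/packages/amd64/server \
  emerge -q --buildpkgonly dev-db/mysql
```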
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary packages for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
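On the client side the &amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt; requirement above ends up as a handful of &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; lines; a sketch using the placeholder URL scheme from above:&lt;br /&gt;

```shell
# /etc/portage/make.conf fragment on a managed host
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app"
FEATURES="getbinpkg"            # fetch binaries from the binhosts
EMERGE_DEFAULT_OPTS="--usepkg"  # prefer binary packages over source builds
```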
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
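The branch layout above can be demonstrated with a self-contained sketch, where a throwaway local repo stands in for whichever official portage git mirror we end up tracking:&lt;br /&gt;

```shell
# "$up" stands in for the official portage git mirror
up=$(mktemp -d) && git -C "$up" init -q
git -C "$up" symbolic-ref HEAD refs/heads/master
git -C "$up" -c user.email=d@e -c user.name=d commit -q --allow-empty -m "upstream tree"
work=$(mktemp -d) && git clone -q "$up" "$work/portage" && cd "$work/portage"
git branch staging && git branch production        # our env branches
git -C "$up" -c user.email=d@e -c user.name=d commit -q --allow-empty -m "upstream update"
git pull -q --ff-only origin master                # controlled sync from upstream
git checkout -q staging && git merge -q --ff-only master     # promote to staging
git checkout -q production && git merge -q --ff-only staging # ... then to production
```

Cherry picks for zero-day fixes would be ordinary &amp;lt;code&amp;gt;git cherry-pick&amp;lt;/code&amp;gt; commits on the affected branch.&lt;br /&gt;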
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid duplicated definitions via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* Keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
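To illustrate, the inheritance described above could be laid out in the overlay like this (paths and profile names are examples; the relative parent path depends on where the master Gentoo tree is checked out):&lt;br /&gt;

```
profiles/base/parent:
    ../../../gentoo/profiles/default/linux/amd64/13.0

profiles/server/parent:
    ../base

profiles/desktop/parent:
    ../base
```

Each &amp;lt;code&amp;gt;parent&amp;lt;/code&amp;gt; file simply lists the profiles to inherit from, one per line.&lt;br /&gt;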
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
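A client-side &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; fragment matching the above might look like this (host name, path and credentials are placeholders, and it assumes portage&#039;s fetcher honors HTTP basic auth credentials embedded in the URL):&lt;br /&gt;

```
PORTAGE_BINHOST="https://client01:secret@packages.example.com/production/gentoo/amd64/server"
EMERGE_DEFAULT_OPTS="--usepkg --getbinpkg"
```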
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
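The three file groups could be modelled as HTTPS locations with different auth requirements; a hypothetical nginx sketch (server name, paths and htpasswd files are examples):&lt;br /&gt;

```nginx
server {
    listen 443 ssl;
    server_name mirror.example.com;
    root /srv/mirror;

    # public: no auth, potentially world readable
    location /public/ { }

    # site-local: any authenticated client of our infrastructure
    location /site-local/ {
        auth_basic "site-local files";
        auth_basic_user_file /etc/nginx/htpasswd-site;
    }

    # stack-local: only clients of the matching stack group
    location /stack-local/ {
        auth_basic "stack-local files";
        auth_basic_user_file /etc/nginx/htpasswd-stack;
    }
}
```

Client-certificate auth would replace &amp;lt;code&amp;gt;auth_basic&amp;lt;/code&amp;gt; with TLS client verification.&lt;br /&gt;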
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
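Distinguishing machines by MAC address could, for instance, be done directly in an iPXE script that chainloads a per-host configuration (the URL scheme is an example):&lt;br /&gt;

```
#!ipxe
dhcp
# fetch a host-specific boot script named after the client MAC address
chain https://install.example.com/boot/${net0/mac}.ipxe
```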
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since the standard does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* Register a PEN OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self-service interface (ie. for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking etc.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
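A rough sketch of the two-level CA setup using plain OpenSSL (subject names, key sizes and lifetimes are examples; a real setup would add proper CA extensions such as basicConstraints and CRL distribution points):&lt;br /&gt;

```shell
set -e
# root CA key and self-signed certificate (to be kept offline)
openssl genrsa -out root-ca.key 4096
openssl req -x509 -new -sha256 -days 3650 -key root-ca.key \
    -subj "/DC=ch/DC=rabe/CN=Example Root CA" -out root-ca.crt

# level 1 sub-CA key and CSR, signed by the root
openssl genrsa -out sub-ca.key 4096
openssl req -new -sha256 -key sub-ca.key \
    -subj "/DC=ch/DC=rabe/CN=Example Level 1 CA" -out sub-ca.csr
openssl x509 -req -sha256 -days 1825 -in sub-ca.csr \
    -CA root-ca.crt -CAkey root-ca.key -CAcreateserial -out sub-ca.crt

# sanity check the resulting chain
openssl verify -CAfile root-ca.crt sub-ca.crt
```

Level 2 CAs would be signed the same way with the level 1 key.&lt;br /&gt;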
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted (mainly) on [http://www.github.com GitHub] under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** Contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Signing-off might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance either before or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build farm proposal ==&lt;br /&gt;
The build farm consists of multiple VMs which build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. a build might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest; among other things it shows that google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is sabayon&#039;s portage replacement; it focuses on binaries due to sabayon being a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box, I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRY refactoring ([https://github.com/jedi4ever/veewee/pull/690]) to make it worth forking&lt;br /&gt;
* [http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4 mkstage4]&lt;br /&gt;
** aimed at creating backup stage4 tarballs of gentoo systems&lt;br /&gt;
** written in bash&lt;br /&gt;
** pretty simple, might come in handy as automation tool&lt;br /&gt;
&lt;br /&gt;
==== kernel ====&lt;br /&gt;
&lt;br /&gt;
* at the moment we build tarballs for the kernel+initramfs and the modules using &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; and have a separate ebuild which installs them&lt;br /&gt;
* ideally we would like to have an ebuild which takes the kernel sources (like the ebuild for &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt; does), builds them according to some default configuration or a user configuration if available (&amp;lt;code&amp;gt;savedconfig.eclass&amp;lt;/code&amp;gt;) and then installs the kernel and the modules as well as some minimal headers+configuration to build other packages requiring the sources to be present&lt;br /&gt;
* TODO: check whether dracut has some advantages regarding module loading over genkernel-generated initramfs&lt;br /&gt;
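Such an ebuild could be sketched roughly as follows (entirely hypothetical and untested; package name, SRC_URI and install paths are placeholders, and a real ebuild would need considerably more care):&lt;br /&gt;

```
# hypothetical sys-kernel/bin-kernel ebuild sketch
EAPI=5
inherit savedconfig

DESCRIPTION="Kernel and modules built from sources with a saved or default config"
SRC_URI="https://www.kernel.org/pub/linux/kernel/v3.x/linux-${PV}.tar.xz"

src_prepare() {
    # use the user-saved configuration if present, else a shipped default
    restore_config .config
}

src_compile() {
    emake bzImage modules
}

src_install() {
    insinto /boot
    doins arch/x86/boot/bzImage
    emake INSTALL_MOD_PATH="${D}" modules_install
}
```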
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie. profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
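The composition with librarian-puppet/r10k would then be driven by a &amp;lt;code&amp;gt;Puppetfile&amp;lt;/code&amp;gt; roughly like this (module names, the version pin and the private clone URL are examples):&lt;br /&gt;

```ruby
forge "https://forgeapi.puppetlabs.com"

# public modules, pinned via semver tags
mod "puppetlabs/stdlib", "4.1.0"

# roles and profiles from our own repo, one branch per environment
mod "role",
  :git => "https://git.example.com/puppet.git", :ref => "production"
```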
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at developer boxes (with virtualbox). Has 3rd-party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout, somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement get used as soon as OpenSSL catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* adhere to git-flow for all the things. Automate said usage as far as possible.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=4 | git-flow branching &lt;br /&gt;
|-&lt;br /&gt;
! Branch&lt;br /&gt;
! Environment&lt;br /&gt;
! Merge from&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| production&lt;br /&gt;
| &amp;lt;code&amp;gt;release/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;hotfix/&amp;lt;/code&amp;gt;&lt;br /&gt;
| Released code with a &amp;lt;code&amp;gt;git tag&amp;lt;/code&amp;gt; for each merge.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;release/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| Contains final releasing work like updating versioning and the changelog. This is where we keep semver concerns in check if they were not taken care of already.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;hotfix/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only for critically urgent fixes. In most cases doing a release from &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt; is preferred.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only feature branches that are ready for production should get merged here. &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; gets merged here after each merge to it. Merging is done with pull requests and review.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/featurename&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| New features get implemented here until they are considered ready for production and merged to &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;support/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| LTS&lt;br /&gt;
| &lt;br /&gt;
| Marked experimental in most implementations and unused for now.&lt;br /&gt;
|}&lt;br /&gt;
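The release path in the table above can be exercised with plain git; a minimal sketch (repository, committer details and version number are examples):&lt;br /&gt;

```shell
set -e
git init -q demo
cd demo
git config user.email admin@example.com
git config user.name admin
# ensure the initial branch is named master regardless of git defaults
git symbolic-ref HEAD refs/heads/master
git commit -q --allow-empty -m "initial commit"

git checkout -q -b develop
git commit -q --allow-empty -m "feature work, ready for production"

# cut a release branch for final versioning work (staging)
git checkout -q -b release/v1.0.0
git commit -q --allow-empty -m "bump version and changelog for 1.0.0"

# merge to master and tag: this is the production release
git checkout -q master
git merge -q --no-ff -m "release v1.0.0" release/v1.0.0
git tag -a v1.0.0 -m "release v1.0.0"

# merge master back to develop so both share the release history
git checkout -q develop
git merge -q master
git tag -l
```

In our setup the merge to &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; would additionally be gated through a reviewed pull request.&lt;br /&gt;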
&lt;br /&gt;
* Install gitlab on a vm and integrate external mirrors from github and ldap users from stoney-ldap.&lt;br /&gt;
** keep the list of public mirror repos in hieradata so we can configure them from puppet.&lt;br /&gt;
** each organisation in stoney-ldap automatically gets a private project in gitlab.&lt;br /&gt;
* Configure web hook infrastructure and integrate it with the continuous integration system.&lt;br /&gt;
* Make continuous integration feedback visible in gitlab.&lt;br /&gt;
** check for &amp;lt;code&amp;gt;git annotate&amp;lt;/code&amp;gt; support or use img badges.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;On organization projects in gitlab&#039;&#039;&#039;&lt;br /&gt;
* Each project comes with default repos. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Repo&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
| Set up using a template, contains a Puppetfile and Puppetfile.lock and a hieradata directory.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global role module for reference.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global profile module for reference.&lt;br /&gt;
|}&lt;br /&gt;
* Everything in the latter two modules is configurable through hieradata in the first repo.&lt;br /&gt;
* The default setup automatically updates &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt; when they get new merges.&lt;br /&gt;
* A software agent (ci) regularly clones &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;, does a full build and pushes the results back to &amp;lt;code&amp;gt;feature/tinderbox&amp;lt;/code&amp;gt;&lt;br /&gt;
* This agent automatically creates pull requests if tinderbox builds did not fail.&lt;br /&gt;
* Org leaders may then merge these PRs and bake them into a local release.&lt;br /&gt;
* Some kind of UI helps them do this without much technical knowledge.&lt;br /&gt;
* More repos may be added by the customer.&lt;br /&gt;
* Project organizations are private, one per customer.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
* [https://github.com/sag47/gitlab-mirrors gitlab-mirrors] is a companion app to gitlab for adding readonly mirror repos to gitlab. We might consider hacking it to not use &amp;lt;code&amp;gt;git remote prune&amp;lt;/code&amp;gt;.&lt;br /&gt;
* [http://www.javacodegeeks.com/2014/01/git-flow-with-jenkins-and-gitlab.html git-flow with jenkins and gitlab]&lt;br /&gt;
* [https://wiki.jenkins-ci.org/display/JENKINS/Gitlab+Hook+Plugin gitlab hook for jenkins]&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3000</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=3000"/>
		<updated>2014-01-31T19:50:07Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* git hosting proposal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance a Ruby on Rails stack will be based off of a ruby stack, which is based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-updates and security updates (which must be provided nonetheless if the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary packages for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles (such as php5-app, django-app etc.)&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via an HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
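A minimal dry-run sketch of such a maintenance pass on the build host (the helpers &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt; are the standard Gentoo tools; the ordering and flags are assumptions, and every command is wrapped in an echoing &amp;lt;code&amp;gt;run&amp;lt;/code&amp;gt; helper so the sketch stays side-effect free):&lt;br /&gt;

```shell
#!/bin/sh
# Dry-run sketch of a build chroot maintenance pass.
# Tool names are the standard Gentoo helpers; ordering and flags are
# assumptions. Drop the echo in run() to execute the commands for real.
run() { echo "would run: $*"; }

run emerge --update --deep --newuse @world  # rebuild world against new tree
run revdep-rebuild                          # rebuild packages with broken lib links
run perl-cleaner --all                      # rebuild perl modules
run python-updater                          # rebuild python modules
```

Dropping the &amp;lt;code&amp;gt;echo&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;run()&amp;lt;/code&amp;gt; turns the dry run into the real pass; build failures would then be reported through the notification channel.&lt;br /&gt;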
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows checking for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
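The clone-with-environment-branches scheme above could be bootstrapped roughly as follows (the &amp;lt;code&amp;gt;upstream&amp;lt;/code&amp;gt; repo here is a throwaway stand-in for the synced gentoo tree; repo paths and branch names are assumptions):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: maintain a git clone of the portage tree with one branch per
# environment. The "upstream" repo is a throwaway stand-in for the real
# synced gentoo tree; names and layout are assumptions.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# stand-in for the upstream tree we sync from
git init -q upstream
cd upstream
git config user.email ops@example.com
git config user.name ops
git commit -q --allow-empty -m "portage snapshot"
cd ..

git clone -q upstream portage
cd portage
git config user.email ops@example.com
git config user.name ops

# one branch per environment, all starting from the synced upstream state
git branch development
git branch staging
git branch production

# cherry-picked fixes land on staging first, then get promoted upwards
git checkout -q staging
git commit -q --allow-empty -m "cherry-picked security fix"
git checkout -q production
git merge -q --no-ff -m "promote staging to production" staging
```

In production use the sync from upstream would be a scheduled job, and the staging -&amp;gt; production merge would be gated on testing.&lt;br /&gt;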
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as does cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
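On the client side, the package host and file mirror URLs would end up in &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt; roughly as follows (hostnames, environment, arch and profile names are placeholders filled in from the scheme above):&lt;br /&gt;

```shell
# /etc/portage/make.conf (sketch; hostnames and profile names are placeholders)

# binary package sources: system profile first, then the app stack profile
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app"

# local distfile mirror (see the file mirror host requirements)
GENTOO_MIRRORS="https://mirror.example.com/public/gentoo/distfiles"

# prefer binary packages when emerging
EMERGE_DEFAULT_OPTS="--usepkg"
```

Authentication credentials (basic auth or client certificates) would be layered on top of these URLs as decided in the package host requirements.&lt;br /&gt;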
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since it does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP and develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA], last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self-service interface (ie. for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; those will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the like&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance either before or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest; among other things it shows that google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is sabayon&#039;s portage replacement; it focuses on binaries due to sabayon being a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box; I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRY: [https://github.com/jedi4ever/veewee/pull/690] to make it worth forking&lt;br /&gt;
* [http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4 mkstage4]&lt;br /&gt;
** aimed at creating backup stage4 tarballs of gentoo systems&lt;br /&gt;
** written in bash&lt;br /&gt;
** pretty simple, might come in handy as automation tool&lt;br /&gt;
&lt;br /&gt;
==== kernel ====&lt;br /&gt;
&lt;br /&gt;
* at the moment we build tarballs for the kernel+initramfs and the modules using &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; and have a separate ebuild which installs them&lt;br /&gt;
* ideally we would like an ebuild which takes the kernel sources (like the ebuild for &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt; does), builds them according to a default configuration or a user configuration if available (&amp;lt;code&amp;gt;savedconfig.eclass&amp;lt;/code&amp;gt;), and then installs the kernel and the modules as well as some minimal headers and configuration needed to build other packages that require the sources to be present&lt;br /&gt;
* TODO: check whether dracut has some advantages regarding module loading over genkernel-generated initramfs&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** github may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but supports multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout, somewhere safe&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** openssl has the largest userbase, which should make it easier on new admins&lt;br /&gt;
** features that openssl does not yet implement get adopted as soon as openssl catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
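The two-level ceremony could look roughly like the following OpenSSL sketch. Subject names and validity periods are placeholders; a real ceremony would also add &amp;lt;code&amp;gt;basicConstraints&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;keyUsage&amp;lt;/code&amp;gt; CA extensions (omitted here for brevity) and keep the root key on the air-gapped machine:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the two-level CA setup with plain OpenSSL. Subject names and
# validity periods are placeholders; real CA certs additionally need
# basicConstraints/keyUsage extensions, omitted here for brevity.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# self-signed root certificate (kept offline after the ceremony)
openssl req -x509 -newkey rsa:4096 -nodes -keyout root.key -out root.crt \
  -days 7300 -subj "/DC=ch/DC=rabe/CN=Example Root CA"

# level 1 intermediate: key + CSR, then signed by the root
openssl req -newkey rsa:4096 -nodes -keyout level1.key -out level1.csr \
  -subj "/DC=ch/DC=rabe/CN=Example Level 1 CA"
openssl x509 -req -in level1.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -days 3650 -out level1.crt

# the intermediate must verify against the offline root
openssl verify -CAfile root.crt level1.crt
```

The level 2 CAs would be produced the same way, signed with &amp;lt;code&amp;gt;level1.key&amp;lt;/code&amp;gt; instead of the root key.&lt;br /&gt;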
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* adhere to git-flow for all the things. Automate said usage as far as possible.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! colspan=4 | git-flow branching &lt;br /&gt;
|-&lt;br /&gt;
! Branch&lt;br /&gt;
! Environment&lt;br /&gt;
! Merge from&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| production&lt;br /&gt;
| &amp;lt;code&amp;gt;release/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;hotfix/&amp;lt;/code&amp;gt;&lt;br /&gt;
| Released code with a &amp;lt;code&amp;gt;git tag&amp;lt;/code&amp;gt; for each merge.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;release/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| Contains final release work like updating the version and changelog. This is where we keep semver concerns in check if they were not already taken care of.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;hotfix/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| staging&lt;br /&gt;
| &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only for critically urgent fixes. In most cases doing a release from &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt; is preferred.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt;&lt;br /&gt;
| Only feature branches that are ready for production should get merged here. &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; gets merged here after each merge to it. Merging is done with pull requests and review.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;feature/featurename&amp;lt;/code&amp;gt;&lt;br /&gt;
| development&lt;br /&gt;
| &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;&lt;br /&gt;
| New features get implemented here until they are considered ready for production and merged to &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;support/v0.0.0&amp;lt;/code&amp;gt;&lt;br /&gt;
| LTS&lt;br /&gt;
| &lt;br /&gt;
| Marked experimental in most implementations and unused for now.&lt;br /&gt;
|}&lt;br /&gt;
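The release path in the table (feature -&amp;gt; develop -&amp;gt; release -&amp;gt; master + tag, then back-merge) can be walked through with plain git; the repository and commits below are throwaway examples:&lt;br /&gt;

```shell
#!/bin/sh
# Walk the git-flow release path from the table with plain git commands.
# The repository and commit contents are throwaway examples.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
git branch -M master
git branch develop

# feature branch off develop, merged back once production-ready
git checkout -q -b feature/widget develop
git commit -q --allow-empty -m "implement widget"
git checkout -q develop
git merge -q --no-ff -m "merge feature/widget" feature/widget

# release branch carries final version/changelog work, then lands on master
git checkout -q -b release/v1.0.0 develop
git commit -q --allow-empty -m "bump changelog for v1.0.0"
git checkout -q master
git merge -q --no-ff -m "release v1.0.0" release/v1.0.0
git tag v1.0.0

# master is merged back to develop after the release
git checkout -q develop
git merge -q -m "back-merge master" master
```

In our setup the merges to &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;master&amp;lt;/code&amp;gt; would go through reviewed pull requests rather than direct merges.&lt;br /&gt;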
&lt;br /&gt;
* Install gitlab on a vm and integrate external mirrors from github and ldap users from stoney-ldap.&lt;br /&gt;
** keep repo of public mirrors in hieradata so we can configure them from puppet.&lt;br /&gt;
** each organisation in stoney-ldap automatically gets a private project in gitlab.&lt;br /&gt;
* Configure web hook infrastructure and integrate with the continuous integration system.&lt;br /&gt;
* Make continuous integration show feedback in gitlab.&lt;br /&gt;
** check for &amp;lt;code&amp;gt;git annotate&amp;lt;/code&amp;gt; support or use img badges.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;On organization projects in gitlab&#039;&#039;&#039;&lt;br /&gt;
* Each project comes with default repos. &lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Repo&lt;br /&gt;
! Description&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;puppet&amp;lt;/code&amp;gt;&lt;br /&gt;
| Set up using a template, contains a Puppetfile and Puppetfile.lock and a hieradata directory.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global role module for reference.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt;&lt;br /&gt;
| Read only copy of global profile module for reference.&lt;br /&gt;
|}&lt;br /&gt;
* Everything in the latter two modules is configurable through hieradata in the first repo.&lt;br /&gt;
* The default setup automatically updates &amp;lt;code&amp;gt;role&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;profile&amp;lt;/code&amp;gt; when they get new merges.&lt;br /&gt;
* A software agent (ci) regularly clones &amp;lt;code&amp;gt;develop&amp;lt;/code&amp;gt;, does a full build and pushes the results back to &amp;lt;code&amp;gt;feature/tinderbox&amp;lt;/code&amp;gt;&lt;br /&gt;
* This agent automatically creates pull requests if tinderbox builds did not fail.&lt;br /&gt;
* Org leaders may then merge these PRs and bake them into a local release.&lt;br /&gt;
* Some kind of UI helps them do this without much technical knowledge.&lt;br /&gt;
* More repos may be added by the customer.&lt;br /&gt;
* project organizations are private, per customer.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
* [https://github.com/sag47/gitlab-mirrors gitlab-mirrors] is a companion app to gitlab for adding readonly mirror repos to gitlab. We might consider hacking it to not use &amp;lt;code&amp;gt;git remote prune&amp;lt;/code&amp;gt;.&lt;br /&gt;
* [http://www.javacodegeeks.com/2014/01/git-flow-with-jenkins-and-gitlab.html git-flow with jenkins and gitlab]&lt;br /&gt;
* [https://wiki.jenkins-ci.org/display/JENKINS/Gitlab+Hook+Plugin gitlab hook for jenkins]&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2999</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2999"/>
		<updated>2014-01-31T18:22:35Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* &amp;quot;stage4&amp;quot;/box/iso building */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance a Ruby on Rails stack will be based on a ruby stack, which in turn is based on a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-updates and security updates (which must be provided nonetheless if the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
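As a rough illustration of the first requirement, the same package version could be built into two separate binary package trees by varying &amp;lt;code&amp;gt;USE&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;PKGDIR&amp;lt;/code&amp;gt; per build profile (the paths and flag sets below are assumptions, not decisions):&lt;br /&gt;

```shell
# Sketch: build dev-db/mysql twice with different USE flags into
# separate PKGDIR trees, one per build profile (paths are examples).
USE="server" PKGDIR=/var/portage/packages/server \
    emerge --buildpkgonly dev-db/mysql
USE="minimal -server" PKGDIR=/var/portage/packages/minimal \
    emerge --buildpkgonly dev-db/mysql
```

Clients would then point their binhost URL at whichever tree matches their profile.&lt;br /&gt;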
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
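On the client side, the binhost scheme above boils down to a &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; fragment along these lines (URLs and profile names are placeholders, not final values):&lt;br /&gt;

```shell
# /etc/portage/make.conf fragment (sketch)
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app"
GENTOO_MIRRORS="https://mirror.example.com/public/gentoo/distfiles"
# prefer binary packages, fall back to building from source for deviations
EMERGE_DEFAULT_OPTS="--usepkg --getbinpkg"
```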
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
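The branch layout for the tree clone could be sketched like this; the repo name is an assumption and a throwaway repo stands in for a real upstream sync:&lt;br /&gt;

```shell
# Sketch: one branch per environment, changes are merged "upwards" only.
set -e
repo=$(mktemp -d)/portage-mirror
git init -q "$repo" && cd "$repo"
echo 'metadata' > sync-marker            # stand-in for a synced tree
git -c user.name=sync -c user.email=sync@example.com add -A
git -c user.name=sync -c user.email=sync@example.com commit -qm 'sync from upstream'
git branch development
git branch staging
git branch production
# promote staging to production (fast-forward only, no cherry-picks)
git checkout -q production
git merge -q --ff-only staging
```

Cherry-picks would happen against &amp;lt;code&amp;gt;development&amp;lt;/code&amp;gt; only, and only for emergencies such as the zero-day case mentioned above.&lt;br /&gt;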
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
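Given portage&#039;s direct repository support mentioned above, the overlay could be wired up without layman via a repos.conf entry; all names and URLs below are placeholders:&lt;br /&gt;

```ini
# /etc/portage/repos.conf/example-overlay.conf (sketch)
[example-overlay]
location = /var/db/repos/example-overlay
sync-type = git
sync-uri = https://git.example.com/portage/example-overlay.git
auto-sync = yes
```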
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
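The requirements above could map to a layout like the following inside the overlay; directory and profile names are made up for illustration:&lt;br /&gt;

```text
profiles/                       (inside the overlay)
  base/
    parent          -> points at an official Gentoo profile, e.g.
                       gentoo:default/linux/amd64/13.0
    make.defaults   -> USE, CFLAGS, PORTAGE_BINHOST, GENTOO_MIRRORS ...
    package.unmask  -> unmask packages needed beyond the base system
  server/
    parent          -> "../base" plus server-specific additions
  desktop/
    parent          -> "../base"
```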
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since it does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self-service interface (ie. for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Signing off might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
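The tagging convention above, sketched on a throwaway stand-in repo:&lt;br /&gt;

```shell
# Sketch: tag a release according to the vX.Y.Z convention.
set -e
rel=$(mktemp -d)/component
git init -q "$rel" && cd "$rel"
touch README
git -c user.name=dev -c user.email=dev@example.com add -A
git -c user.name=dev -c user.email=dev@example.com commit -qm 'initial release'
git tag v1.0.0          # code landed on production -> 1.0.0
git tag --list 'v*'     # prints "v1.0.0"
```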
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either before or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest, among other things it shows that google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is sabayon&#039;s portage replacement; it focuses on binaries due to sabayon being a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box, I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRY ([https://github.com/jedi4ever/veewee/pull/690]) to make it worth forking&lt;br /&gt;
* [http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4 mkstage4]&lt;br /&gt;
** aimed at creating backup stage4 tarballs of gentoo systems&lt;br /&gt;
** written in bash&lt;br /&gt;
** pretty simple, might come in handy as automation tool&lt;br /&gt;
&lt;br /&gt;
==== kernel ====&lt;br /&gt;
&lt;br /&gt;
* at the moment we build tarballs for the kernel+initramfs and the modules using &amp;lt;code&amp;gt;genkernel&amp;lt;/code&amp;gt; and have a separate ebuild which installs them&lt;br /&gt;
* ideally we would like to have an ebuild which takes the kernel sources (like the ebuild for &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt; does), builds them according to a default configuration or a user configuration if available (&amp;lt;code&amp;gt;savedconfig.eclass&amp;lt;/code&amp;gt;), and then installs the kernel and the modules as well as some minimal headers+configuration to build other packages requiring the sources to be present&lt;br /&gt;
* TODO: check whether dracut has some advantages regarding module loading over genkernel-generated initramfs&lt;br /&gt;
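The current tarball-based flow could be scripted roughly as follows; the archive paths and the kernel config location are assumptions:&lt;br /&gt;

```shell
# Sketch: non-interactive kernel + initramfs build from a saved config,
# then tar up the artifacts for a separate kernel ebuild to install.
genkernel --install --lvm --kernel-config=/root/kernel.config all
tar -C /boot -czf /var/tmp/kernel-artifacts.tar.gz .
tar -C /lib/modules -czf /var/tmp/kernel-modules.tar.gz .
```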
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunns [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie. profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** github may use webhooks to push content to our internal git as changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
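A Puppetfile for librarian-puppet or r10k might look like this; module names and refs are examples only, not decisions:&lt;br /&gt;

```ruby
# Puppetfile sketch (module names and refs are placeholders)
forge 'https://forgeapi.puppetlabs.com'

# modules published to the forge
mod 'puppetlabs/stdlib'

# our own modules live in dedicated git repos, pinned per environment
mod 'syslogng',
  :git => 'https://github.com/radiorabe/puppet-syslogng.git',
  :ref => 'production'
```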
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** ssl has the largest userbase which should make it easier on new admins&lt;br /&gt;
** features that openssl does not implement get used as soon as openssl catches up (ie. [http://cmpforopenssl.sourceforge.net/‎ CMP])&lt;br /&gt;
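The key steps of the ceremony, sketched with plain OpenSSL; CN values, key sizes and lifetimes are placeholders, not policy:&lt;br /&gt;

```shell
# Sketch: self-signed root plus a level 1 intermediate signed by it.
set -e
ca=$(mktemp -d)
# root certificate (key goes offline afterwards)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Example Root CA' \
    -keyout "$ca/root.key" -out "$ca/root.crt" -days 3650
# level 1 intermediate: CSR signed by the root
openssl req -new -newkey rsa:2048 -nodes -subj '/CN=Example Level 1 CA' \
    -keyout "$ca/level1.key" -out "$ca/level1.csr"
openssl x509 -req -in "$ca/level1.csr" -CA "$ca/root.crt" \
    -CAkey "$ca/root.key" -CAcreateserial -out "$ca/level1.crt" -days 1825
# the chain must verify against the root
openssl verify -CAfile "$ca/root.crt" "$ca/level1.crt"
```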
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2803</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2803"/>
		<updated>2014-01-11T21:41:34Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* &amp;quot;stage4&amp;quot;/box/iso building */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based off a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off of a Ruby stack which is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-updates and security updates (which must be provided nonetheless if the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
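One way the per-USE-flag builds above could be kept cheap: each variant is simply a separate build chroot whose make.conf pins its own USE flags and binary package directory. A minimal sketch, all paths and flag sets being illustrative rather than decided:&lt;br /&gt;

```
# /build/minimal/etc/portage/make.conf -- hypothetical "minimal" build env
# (libs and clients only, shared by MySQL, PostgreSQL, OpenLDAP etc.)
USE="minimal -server"
PKGDIR="/var/cache/binpkgs/minimal"   # binpkgs kept apart per build profile

# /build/default/etc/portage/make.conf -- hypothetical default build env
USE="server ldap ssl"
PKGDIR="/var/cache/binpkgs/default"
```

Because each chroot has its own PKGDIR, packages with the same version but different USE flags never collide on the package host.&lt;br /&gt;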
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary packages for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles (such as php5-app, django-app etc.)&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
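For the stage tarball builds listed above, Gentoo&#039;s own release tool [http://wiki.gentoo.org/wiki/Catalyst catalyst] would be one candidate (not evaluated here yet); one spec file per environment/arch/system-profile combination could look roughly like this, with every value a placeholder:&lt;br /&gt;

```
# stage3-server.spec -- illustrative catalyst spec, all values are placeholders
subarch: amd64
target: stage3
version_stamp: 2014.1
rel_type: default
profile: default/linux/amd64/13.0
snapshot: 20140111
source_subpath: default/stage3-amd64-latest
```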
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
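The branch layout above can be sketched with a throwaway local repository standing in for the cloned portage tree (the real clone would track the official tree; branch names follow the environments, everything else is illustrative):&lt;br /&gt;

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_AUTHOR_NAME=ci GIT_AUTHOR_EMAIL=ci@example.com \
       GIT_COMMITTER_NAME=ci GIT_COMMITTER_EMAIL=ci@example.com
git init -q portage && cd portage
git commit -q --allow-empty -m 'import upstream snapshot'
dev=$(git symbolic-ref --short HEAD)    # default branch acts as development
git branch -q staging && git branch -q production
# upstream syncs (and emergency cherry-picks) land on development only
git commit -q --allow-empty -m 'sync from upstream'
# promotion is a merge into the next higher environment
git checkout -q staging && git merge -q "$dev"
git checkout -q production && git merge -q staging
git log --format=%s -n 1 production     # newest sync has reached production
```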
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
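If the direct repository support mentioned above is used instead of layman, the overlay becomes a plain repos.conf entry on every client. A sketch, with the overlay name and sync URI invented for illustration:&lt;br /&gt;

```
# /etc/portage/repos.conf/stoney.conf -- hypothetical overlay name and URI
[stoney]
location = /var/lib/portage/stoney-overlay
sync-type = git
sync-uri = https://git.example.com/portage/stoney-overlay.git
auto-sync = yes
```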
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
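To illustrate the inheritance rules above: a profile is just a directory in the overlay whose &amp;lt;code&amp;gt;parent&amp;lt;/code&amp;gt; file lists its masters. The layout and names below are made up; the repo-qualified parent syntax assumes &amp;lt;code&amp;gt;profile-formats = portage-2&amp;lt;/code&amp;gt; in the overlay&#039;s &amp;lt;code&amp;gt;metadata/layout.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;

```
# profiles/server/parent -- "server" inherits our base profile
../base

# profiles/base/parent -- base has an official Gentoo profile as its master
gentoo:default/linux/amd64/13.0

# profiles/base/make.defaults -- defaults every client picks up
USE="ldap ssl -X"
GENTOO_MIRRORS="https://mirror.example.com/public/gentoo"
```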
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
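Client side, the requirements above boil down to a make.conf fragment of roughly this shape (host and profile names are placeholders following the URL scheme above):&lt;br /&gt;

```
# /etc/portage/make.conf fragment on a client -- placeholder host/profile names
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app"
# prefer the binary packages built above over building from source
EMERGE_DEFAULT_OPTS="--usepkg --getbinpkg"
```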
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) hard disks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle due to it not defining how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA], last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking and the like.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; those will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance either before or while the maintenance is under way&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest, among other things it shows that Google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is Sabayon&#039;s portage replacement, it focuses on binaries due to Sabayon being a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box, I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
** is in dire need of DRY: [https://github.com/jedi4ever/veewee/pull/690] to make it worth forking&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunns [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** github may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
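The role/profile split above, as a minimal manifest sketch. All class names are invented for illustration, and &amp;lt;code&amp;gt;profile::apache&amp;lt;/code&amp;gt; assumes the puppetlabs-apache module is available:&lt;br /&gt;

```
# role/manifests/webserver.pp -- business view: "this node is a webserver"
class role::webserver {
  include profile::base
  include profile::apache
}

# profile/manifests/apache.pp -- the implementation detail lives in the profile
class profile::apache {
  class { 'apache':
    default_vhost => false,  # vhosts get declared per stack elsewhere
  }
}
```

A node is then classified with exactly one role, which keeps hiera lookups and the node/role mapping trivial.&lt;br /&gt;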
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
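With iPXE the boot step could be as small as a script like this (server name and paths are invented; kernel and initramfs would come from the build host):&lt;br /&gt;

```
#!ipxe
# minimal iPXE sketch -- host name and paths are placeholders
dhcp
kernel http://install.example.com/gentoo/production/amd64/kernel
initrd http://install.example.com/gentoo/production/amd64/initramfs
boot
```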
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning tool aimed at developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 CAs as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest userbase, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement get adopted as soon as OpenSSL catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
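The ceremony steps above, sketched with plain OpenSSL. Subject names, key sizes and lifetimes are placeholders, not policy; a real setup would use a proper CA config rather than bare &amp;lt;code&amp;gt;openssl x509&amp;lt;/code&amp;gt;:&lt;br /&gt;

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
# root CA: self-signed, key never leaves the offline machine
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
    -subj '/DC=ch/DC=rabe/CN=Example Root CA' \
    -keyout root.key -out root.crt
# level 1 intermediate: key + CSR, signed by the root during the ceremony
openssl req -new -newkey rsa:4096 -nodes \
    -subj '/DC=ch/DC=rabe/CN=Example Level 1 CA' \
    -keyout level1.key -out level1.csr
printf 'basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign\n' > ca.ext
openssl x509 -req -in level1.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -days 1825 -extfile ca.ext -out level1.crt
# sanity check: the intermediate chains up to the root
openssl verify -CAfile root.crt level1.crt
```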
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2802</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2802"/>
		<updated>2014-01-11T14:59:16Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* package building */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan to use gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance a Ruby on Rails stack will be based off a ruby stack, which in turn is based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags, for example a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-updates and security updates (which must be provided nonetheless if the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build profiles (for example chroots or some sort of overlay-fs), not Gentoo (portage) profiles; we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
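To make the client side of the above concrete, a minimal &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; sketch could look like this (hostnames, environment and profile names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# binary package repositories: system profile first, then app stack profiles&lt;br /&gt;
PORTAGE_BINHOST=&amp;quot;https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app&amp;quot;&lt;br /&gt;
# our caching distfile mirror&lt;br /&gt;
GENTOO_MIRRORS=&amp;quot;https://mirror.example.com/public/gentoo&amp;quot;&lt;br /&gt;
# prefer (and require) binary packages on regular clients&lt;br /&gt;
EMERGE_DEFAULT_OPTS=&amp;quot;--getbinpkg --usepkgonly&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;--usepkgonly&amp;lt;/code&amp;gt; would be dropped on hosts that are allowed to build the few from-source deviations locally.&lt;br /&gt;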
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows us to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
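A possible day-to-day workflow for the cloned tree, sketched with placeholder URLs (upstream is only ever merged into development, promotion then walks up the branches):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# one-time setup: clone our tree and track the official one as upstream&lt;br /&gt;
git clone https://git.example.com/portage.git &amp;amp;&amp;amp; cd portage&lt;br /&gt;
git remote add upstream git://git.example.org/gentoo-portage.git  # placeholder mirror URL&lt;br /&gt;
# sync upstream into development (automated, ie. daily)&lt;br /&gt;
git checkout development &amp;amp;&amp;amp; git fetch upstream &amp;amp;&amp;amp; git merge upstream/master&lt;br /&gt;
# promote one environment to the next higher one&lt;br /&gt;
git checkout staging &amp;amp;&amp;amp; git merge development&lt;br /&gt;
git checkout production &amp;amp;&amp;amp; git merge staging&lt;br /&gt;
# emergency only (see the cherry-picking discussion above): pull in a zero-day fix&lt;br /&gt;
git checkout production &amp;amp;&amp;amp; git cherry-pick SOME_COMMIT&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;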
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as does cave/paludis), so layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
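With recent portage the overlay can indeed be used without layman; a &amp;lt;code&amp;gt;/etc/portage/repos.conf&amp;lt;/code&amp;gt; sketch (repository name, path and URL are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[rabe-overlay]&lt;br /&gt;
location = /var/lib/portage/rabe-overlay&lt;br /&gt;
sync-type = git&lt;br /&gt;
# the branch checked out here decides the environment (development/staging/production)&lt;br /&gt;
sync-uri = https://git.example.com/rabe-overlay.git&lt;br /&gt;
auto-sync = yes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;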
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
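As an illustration only (paths, profile names and values are made up), such a profile inside the overlay could be laid out like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
profiles/rabe/server/amd64/parent:&lt;br /&gt;
  gentoo:default/linux/amd64/13.0   # official Gentoo profile as master&lt;br /&gt;
  ../../base                        # our common base profile&lt;br /&gt;
profiles/rabe/server/amd64/make.defaults:&lt;br /&gt;
  USE=&amp;quot;-X snmp syslog&amp;quot;&lt;br /&gt;
  CHOST=&amp;quot;x86_64-pc-linux-gnu&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note Tiziano&#039;s warning above: USE is incremental, but variables like CHOST override rather than extend the parent profile&#039;s values.&lt;br /&gt;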
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
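The steps above could be sketched as a post-PXE-boot install script roughly like the following (device names, URLs and paths are placeholders; the real logic would live in razor or our own installer):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# partition and format the first (virtual) disk&lt;br /&gt;
parted -s /dev/vda mklabel msdos mkpart primary ext4 1MiB 100%&lt;br /&gt;
mkfs.ext4 /dev/vda1 &amp;amp;&amp;amp; mount /dev/vda1 /mnt/gentoo&lt;br /&gt;
# unpack the stage3 tarball built by the build host&lt;br /&gt;
wget -O - https://packages.example.com/production/gentoo/amd64/server/stage3.tar.bz2 | tar xjpf - -C /mnt/gentoo&lt;br /&gt;
# bootstrap puppet inside the new system; it takes over from here&lt;br /&gt;
chroot /mnt/gentoo puppet agent --server puppet.example.com --waitforcert 60 --test&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;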
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since the standard does not define how often the CRL must be consulted. Since we are in the same physical net OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; the @purplehaze.ch address was a problem last time!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self-service interface (ie. for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API to signing, revoking et. al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
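A minimal OpenSSL sketch of the two-level CA structure (DNs and file names are placeholders; a real setup would use an &amp;lt;code&amp;gt;openssl.cnf&amp;lt;/code&amp;gt; with proper CA basicConstraints plus CRL/OCSP distribution points, and passphrase-protected keys instead of &amp;lt;code&amp;gt;-nodes&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# self-signed root CA, generated and kept offline&lt;br /&gt;
openssl req -new -x509 -nodes -newkey rsa:4096 -days 7300 -subj &amp;quot;/DC=ch/DC=rabe/CN=RaBe Root CA&amp;quot; -keyout root.key -out root.crt&lt;br /&gt;
# sub-CA: create a CSR and sign it with the root&lt;br /&gt;
openssl req -new -nodes -newkey rsa:4096 -subj &amp;quot;/DC=ch/DC=rabe/CN=RaBe Puppet CA&amp;quot; -keyout sub.key -out sub.csr&lt;br /&gt;
openssl x509 -req -in sub.csr -CA root.crt -CAkey root.key -CAcreateserial -days 3650 -out sub.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;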
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
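Cutting a release under this scheme could look like the following (repo state and version number are examples only):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# finish work on develop, merge up, then tag per semver&lt;br /&gt;
git checkout master &amp;amp;&amp;amp; git merge --no-ff develop&lt;br /&gt;
git tag -a v1.0.0 -m &amp;quot;first production release&amp;quot;&lt;br /&gt;
git push origin master v1.0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
GitHub then offers &amp;lt;code&amp;gt;v1.0.0&amp;lt;/code&amp;gt; as a tarball that the corresponding ebuild can reference in its &amp;lt;code&amp;gt;SRC_URI&amp;lt;/code&amp;gt;.&lt;br /&gt;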
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance either before or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest; among other things it shows that google splits the build into a package-building part and an image-creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is sabayon&#039;s portage replacement; it focuses on binaries since sabayon is a binary distribution&lt;br /&gt;
** their [https://github.com/Sabayon/build build system &amp;quot;Matter&amp;quot;] might be of interest, it seems to automate large parts of tracking gentoo portage with its tinderbox subsystem&lt;br /&gt;
** sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box; I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunns [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub can use webhooks to push changes to our internal git as they happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
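A sketch of the role/profile split in manifest form (class names below are illustrative, not our actual modules):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# role: pure business view, only ever includes profiles&lt;br /&gt;
class role::puppet::master {&lt;br /&gt;
  include profile::base&lt;br /&gt;
  include profile::puppet::master&lt;br /&gt;
}&lt;br /&gt;
# profile: the implementation, wiring up component modules (and hiera data)&lt;br /&gt;
class profile::puppet::master {&lt;br /&gt;
  class { &#039;::puppet&#039;: server =&amp;gt; true }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Nodes then get exactly one role (ie. &amp;lt;code&amp;gt;include role::puppet::master&amp;lt;/code&amp;gt; from the node definition or an ENC).&lt;br /&gt;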
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning tool aimed at developer boxes (with virtualbox). Has 3rd party support for various cloud systems and might be interesting for creating dev clouds. I&#039;ve seen it used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 CAs as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement (ie. [http://cmpforopenssl.sourceforge.net/ CMP]) get adopted as soon as it catches up&lt;br /&gt;
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2801</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2801"/>
		<updated>2014-01-11T14:56:04Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* package building */ some sabayon stuff&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan to use Gentoo as an infrastructure backbone for building a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based off a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off of a Ruby stack, which is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; case is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-dep updates and security updates (which must be provided regardless of whether the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
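The &quot;same version, different USE flags&quot; idea above can be sketched as one portage configuration per build environment. The environment names, package.use contents and /tmp paths below are purely illustrative, not our real build chroots:&lt;br /&gt;

```shell
# Hypothetical per-build-profile USE configuration (names are invented).
set -e
for profile in minimal full; do
  mkdir -p "/tmp/binpkg-demo/$profile/etc/portage"
done
# "minimal" env: client libs only, no server bits
printf '%s\n' 'dev-db/mysql minimal' 'net-analyzer/zabbix agent -server' \
  > /tmp/binpkg-demo/minimal/etc/portage/package.use
# "full" env: complete server builds
printf '%s\n' 'dev-db/mysql -minimal' 'net-analyzer/zabbix agent server' \
  > /tmp/binpkg-demo/full/etc/portage/package.use
# inside each chroot one would then build binaries with e.g.:
#   FEATURES="buildpkg" emerge --quiet dev-db/mysql net-analyzer/zabbix
```

Since both environments share the same frozen portage tree, the two builds differ only in USE flags, not in package versions.&lt;br /&gt;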
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs), not Gentoo (portage) profiles; we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** RaBe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
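The client-side consequence of the binhost URL scheme above can be sketched as a make.conf fragment. The hostnames, environment and profile names are placeholders taken from the example URLs, and the /tmp path stands in for a client root:&lt;br /&gt;

```shell
# Illustrative client make.conf fragment; packages.example.com and the
# profile names come from the URL scheme sketched above.
set -e
mkdir -p /tmp/binhost-client-demo/etc/portage
printf '%s\n' \
  'PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app"' \
  'GENTOO_MIRRORS="https://mirror.example.com/public/gentoo"' \
  'EMERGE_DEFAULT_OPTS="--usepkg --getbinpkg"' \
  > /tmp/binhost-client-demo/etc/portage/make.conf
# the client then installs from binaries, e.g.:
#   emerge --usepkgonly www-servers/apache
```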
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
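The branch workflow above (sync from upstream onto development, merge up to staging and then production) can be sketched with plain git. The repository below is a throwaway stand-in for the actual portage clone, and the snapshot file merely simulates an upstream sync:&lt;br /&gt;

```shell
# Throwaway demo of the development -> staging -> production promotion flow.
set -e
repo=/tmp/portage-clone-demo
rm -rf "$repo"
git init -q "$repo"
cd "$repo"
git config user.email demo@example.com
git config user.name demo
git checkout -q -b development
# stand-in for an upstream sync landing on the development branch
echo 'portage tree snapshot 2014-01-11' > tree-snapshot
git add tree-snapshot
git commit -qm 'sync from upstream'
# promote to the next higher branch via fast-forward merges only
git branch staging development
git branch production staging
git checkout -q staging
git merge -q --ff-only development
git checkout -q production
git merge -q --ff-only staging
```

Restricting promotion to fast-forward merges keeps staging and production strict subsets of development history, which makes cherry-picks (the zero-day case above) stand out clearly.&lt;br /&gt;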
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as does cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
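A minimal sketch of such a profile tree follows; the profile names, the parent target and the /tmp path are invented for illustration (a real overlay would name its official Gentoo master profile in each &quot;parent&quot; file):&lt;br /&gt;

```shell
# Hypothetical overlay profile layout: base/server profiles per arch, each
# chaining up to a master profile through its parent file.
set -e
base=/tmp/profile-demo/profiles/infra
mkdir -p "$base/base/amd64" "$base/server/amd64"
echo 'gentoo:default/linux/amd64/13.0' > "$base/base/amd64/parent"
echo '../../base/amd64' > "$base/server/amd64/parent"
# non-incremental variables (CFLAGS, CHOST, ...) must be restated here,
# as per the warning above
printf '%s\n' 'CFLAGS="-O2 -pipe"' 'CHOST="x86_64-pc-linux-gnu"' \
  > "$base/base/amd64/make.defaults"
```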
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
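The PXE boot step could look roughly like the iPXE script written out below. The install host URL, kernel and initramfs names are made up; the real install host would generate one such script per MAC address group:&lt;br /&gt;

```shell
# Writes a hypothetical iPXE boot script; install.example.com and the
# file names are placeholders.
set -e
mkdir -p /tmp/installhost-demo
printf '%s\n' \
  '#!ipxe' \
  'dhcp' \
  '# the install host maps this MAC to an environment/arch/profile group' \
  'kernel http://install.example.com/production/amd64/server/kernel' \
  'initrd http://install.example.com/production/amd64/server/initramfs' \
  'boot' \
  > /tmp/installhost-demo/boot.ipxe
```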
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since it does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revocation etc.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
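A minimal sketch of the two-level CA setup with plain openssl follows. Subject names, lifetimes and file locations are placeholders, and a real level-1 intermediate would additionally carry CA basic constraints via an extensions file:&lt;br /&gt;

```shell
# Demo two-level chain: the root signs only the level-1 intermediate.
set -e
dir=/tmp/pki-demo
rm -rf "$dir"
mkdir -p "$dir"
cd "$dir"
# self-signed root (kept offline in the real setup)
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj '/DC=ch/DC=rabe/CN=Demo Root CA' -days 3650
# level-1 intermediate CSR, signed by the root
openssl req -newkey rsa:2048 -nodes -keyout level1.key -out level1.csr \
  -subj '/DC=ch/DC=rabe/CN=Demo Level 1 CA'
openssl x509 -req -in level1.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -out level1.crt -days 1825
# check that the chain verifies against the root
openssl verify -CAfile root.crt level1.crt
```

In the real ceremony root.key never touches a networked machine; only root.crt and the level-1 material leave the offline host.&lt;br /&gt;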
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; those will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
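The tagging convention above can be sketched on a throwaway repository (the repo path and file contents are invented):&lt;br /&gt;

```shell
# Demo of vX.Y.Z release tagging on a scratch repository.
set -e
repo=/tmp/semver-demo
rm -rf "$repo"
git init -q "$repo"
cd "$repo"
git config user.email demo@example.com
git config user.name demo
echo 'component' > README
git add README
git commit -qm 'initial release'
# annotated tag following semver; GitHub exposes such tags as tarballs
# that the corresponding ebuilds can reference
git tag -a v1.0.0 -m 'first production release'
```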
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance either before or while the maintenance is ongoing&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
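One possible environment/arch/profile chroot layout, with invented paths (the real chroots would each hold a stage3 plus the frozen tree for their environment):&lt;br /&gt;

```shell
# Illustrative chroot tree on the build host; names and paths are ours.
set -e
for env in development staging production; do
  for profile in server desktop; do
    mkdir -p "/tmp/buildhost-demo/$env/amd64/$profile"
  done
done
# inside each chroot, binaries land in its own PKGDIR, e.g.:
#   PKGDIR="/var/packages/$env/gentoo/amd64/$profile" FEATURES=buildpkg emerge ...
```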
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest, among other things it shows that Google splits the build into a package building part and an image creation part.&lt;br /&gt;
* [https://wiki.sabayon.org/?title=En:Entropy entropy] is Sabayon&#039;s portage replacement; it focuses on binaries since Sabayon is a binary distribution&lt;br /&gt;
** their build system &amp;quot;Matter&amp;quot; might be of interest (although I haven&#039;t found any info); also Sabayon has &amp;lt;code&amp;gt;kernel-switcher&amp;lt;/code&amp;gt; for updating kernels&lt;br /&gt;
** kernel ebuilds live [https://github.com/Sabayon/sabayon-distro/tree/master/sys-kernel/linux-sabayon here] and probably rely on the [https://github.com/Sabayon/sabayon-distro/blob/master/eclass/sabayon-kernel.eclass sabayon-kernel eclass].&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box; I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunns [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
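A sketch of an r10k configuration for the branch-to-environment mapping above; the repo URL and paths are placeholders, and the file is written to /tmp only for illustration:&lt;br /&gt;

```shell
# Writes a hypothetical r10k.yaml; each branch of the private repo then
# maps to a puppet environment under basedir.
set -e
mkdir -p /tmp/r10k-demo
printf '%s\n' \
  ':cachedir: /var/cache/r10k' \
  ':sources:' \
  '  private:' \
  '    remote: git@git.example.com:puppet.git' \
  '    basedir: /etc/puppet/environments' \
  > /tmp/r10k-demo/r10k.yaml
# deploying all branches as environments would then be, e.g.:
#   r10k deploy environment -c /tmp/r10k-demo/r10k.yaml
```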
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** ssl has the largest userbase which should make it easier on new admins&lt;br /&gt;
** features that openssl does not yet implement get adopted as soon as openssl catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2800</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2800"/>
		<updated>2014-01-11T10:37:39Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Build host proposal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based off a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a ruby stack, which is in turn based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you that we should do this only if necessary; apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration, which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only libs and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase in maintenance work coming up here. The dependency problem is mitigated by the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on say a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; is definitely a good example, I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-dependency updates and security updates (which must be provided whether the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.)&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** RaBe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
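A minimal shell sketch of the per-profile layout above (hostname and paths are placeholders, not the actual infra): two helpers composing the binhost URL and the local package directory, plus a commented emerge invocation that would populate it inside the matching build chroot.&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical helpers for the ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME
# scheme. packages.example.com and /var/lib/binpkgs are assumptions.

binhost_url() {  # $1=environment $2=arch $3=build profile
    echo "https://packages.example.com/$1/gentoo/$2/$3"
}

pkgdir_path() {  # same arguments; package directory on the build host
    echo "/var/lib/binpkgs/$1/gentoo/$2/$3"
}

# Inside the matching chroot, the binaries would be built roughly like:
#   PKGDIR="$(pkgdir_path production amd64 server)" \
#       emerge --buildpkg --usepkg @world
binhost_url production amd64 server
```

The same two helpers could later feed the PORTAGE_BINHOST lines written into a client's make.conf.&lt;br /&gt;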
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
*** this should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory built by collecting puppet facts allows checking for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
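The branch handling above can be sketched with plain git; the snippet below uses a throwaway local repo as a stand-in for the real upstream tree, and the branch names are the ones proposed on this page.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: one branch per environment, with upstream syncs and cherry-picks
# landing on development only, then merged up towards production.
set -e
work="$(mktemp -d)"

# tiny local repo standing in for the official portage tree
git init -q "$work/upstream"
(cd "$work/upstream" \
    && git config user.email gentoo@example.com \
    && git config user.name "Portage Sync" \
    && git commit -q --allow-empty -m "portage snapshot")

git clone -q "$work/upstream" "$work/portage"
cd "$work/portage"
git branch development
git branch staging
git branch production

# a change (e.g. a cherry-picked security fix) enters via development ...
git checkout -q development
git config user.email admin@example.com
git config user.name "Portage Admin"
git commit -q --allow-empty -m "cherry-picked security fix"

# ... and is promoted by merging into the next higher branch
git checkout -q staging && git merge -q development
git checkout -q production && git merge -q staging
```

After the merges, production points at the same commit as development, which is exactly the "easy merge support from one branch to the next higher one" asked for above.&lt;br /&gt;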
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRLs are not worth the hassle since the standard does not define how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA], last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self-service interface (ie. for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking and the like.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
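As a small illustration of the vX.Y.Z tag convention above, a hedged helper that rejects malformed release tags before they are pushed; the regex only covers plain X.Y.Z tags (no pre-release or build metadata), and the helper is illustrative, not part of any existing tooling.&lt;br /&gt;

```shell
#!/bin/sh
# Check a proposed release tag against the plain vX.Y.Z semver form.

is_release_tag() {
    echo "$1" | grep -Eq '^v(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$'
}

# typical use before publishing a release:
#   is_release_tag "v1.0.0" && git tag v1.0.0 && git push origin v1.0.0
is_release_tag "v1.0.0" && echo "v1.0.0 is a valid release tag"
```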
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance either before or while the maintenance is under way&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
&lt;br /&gt;
==== build orchestration ====&lt;br /&gt;
&lt;br /&gt;
==== package building ====&lt;br /&gt;
* [http://www.chromium.org/chromium-os/developer-guide/chromite-shell-quick-start chromite] build utility from chromium os ([https://chromium.googlesource.com/chromiumos/chromite/ source repo])&lt;br /&gt;
** as far as I recall chromium os does highly parallel building, making their builds really fast with a slight trade-off in long-term stability (ie. builds might fail due to dependencies being built out of order)&lt;br /&gt;
** the [http://www.chromium.org/chromium-os/developer-guide chromium os developer guide] might also be of interest, among other things it shows that google splits the build into a package building part and an image creation part.&lt;br /&gt;
&lt;br /&gt;
==== &amp;quot;stage4&amp;quot;/box/iso building ====&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box; I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive number of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** github may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng, but supports multiple environments out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
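A hedged sketch of how r10k could map those branches to puppet environments; the repo URL and basedir below are placeholders, and r10k would create one environment per branch (develop, master, production), resolving each branch&#039;s Puppetfile.&lt;br /&gt;

```shell
#!/bin/sh
# Write a minimal r10k configuration mapping the private repo's branches to
# puppet environments. git.example.com and the basedir are assumptions, not
# the real infrastructure.
set -e
cd "$(mktemp -d)"

cat > r10k.yaml <<'EOF'
:sources:
  :private:
    remote: 'git@git.example.com:radiorabe/puppet.git'
    basedir: '/etc/puppet/environments'
EOF

# deploying then turns every branch into an environment and resolves its
# Puppetfile (-p):
#   r10k deploy environment -p -c r10k.yaml
cat r10k.yaml
```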
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at developer boxes (with VirtualBox). Has third-party support for various cloud systems. Vagrant might be interesting for creating dev clouds; I&#039;ve seen it used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** ssl has the largest userbase which should make it easier on new admins&lt;br /&gt;
** features that openssl does not yet implement get adopted as soon as openssl catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
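The ceremony above could look roughly like this with plain openssl; subject names, key sizes and lifetimes are placeholders rather than the actual RaBe policy, and a real setup would use a proper CA configuration instead of bare x509 signing.&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: the root signs only the level 1 intermediate, which in turn signs
# level 2 CAs. All names below are examples.
set -e
cd "$(mktemp -d)"

# CA extensions for the intermediates (openssl x509 -req adds none by itself)
printf 'basicConstraints=critical,CA:TRUE\nkeyUsage=keyCertSign,cRLSign\n' > ca.ext

# offline root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
    -subj "/DC=ch/DC=rabe/CN=Example Root CA" -days 3650

# level 1 intermediate, signed by the root
openssl req -newkey rsa:2048 -nodes -keyout l1.key -out l1.csr \
    -subj "/DC=ch/DC=rabe/CN=Example Level 1 CA"
openssl x509 -req -in l1.csr -CA root.crt -CAkey root.key -CAcreateserial \
    -extfile ca.ext -out l1.crt -days 1825

# level 2 CA for client certs, signed by level 1
openssl req -newkey rsa:2048 -nodes -keyout l2.key -out l2.csr \
    -subj "/DC=ch/DC=rabe/CN=Example Level 2 Client CA"
openssl x509 -req -in l2.csr -CA l1.crt -CAkey l1.key -CAcreateserial \
    -extfile ca.ext -out l2.crt -days 365

# the chain must verify back to the offline root
openssl verify -CAfile root.crt -untrusted l1.crt l2.crt
```

The DC components put all DNs below dc=rabe,dc=ch as required above; the level 1 key material would then live on the admins&#039; sdcards while root.key stays offline.&lt;br /&gt;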
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2799</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2799"/>
		<updated>2014-01-11T10:19:21Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based off a system profile but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off of a Ruby stack, which is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this, we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on, say, a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
**** The &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; case is definitely a good example: I don&#039;t want to install and maintain MySQL, Apache, PHP, snmpd (including all the deps) etc. on hosts which just need a Zabbix agent. I would also like to pragmatically avoid unused deps, in order to minimize reverse-dependency updates and security updates (which must be provided whether the software is in use or not). --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 13:20, 10 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles (such as php5-app, django-app etc.)&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
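The package-host layout above could translate into a client-side &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; fragment along these lines (host names and profile names are placeholders, not existing infrastructure):&lt;br /&gt;

```shell
# /etc/portage/make.conf fragment (sketch; example.com hosts and profile names are placeholders)
# One binhost URL for the system profile, then one per application stack profile.
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/php5-app"
# Fetch distfiles from the local caching mirror before falling back to official mirrors.
GENTOO_MIRRORS="https://mirror.example.com/public/gentoo"
# Prefer binary packages; the few deviations still build from source.
EMERGE_DEFAULT_OPTS="--usepkg"
```

Portage consults each space-separated &amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt; entry in turn, so application-stack binhosts only need to carry their extra packages.&lt;br /&gt;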
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
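As a rough sketch of the branch handling described above (repository locations are hypothetical; a throwaway local repo stands in for the official portage tree):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: local portage-tree clone with development/staging/production branches.
# "$UPSTREAM" stands in for the official portage git mirror (hypothetical path).
set -e
WORK=$(mktemp -d)
UPSTREAM="$WORK/upstream"

# Fake upstream tree with a single commit on master.
git init -q "$UPSTREAM"
git -C "$UPSTREAM" symbolic-ref HEAD refs/heads/master
git -C "$UPSTREAM" -c user.email=a@b -c user.name=ci commit -q --allow-empty -m 'tree snapshot'

# Clone and create the three environment branches.
git clone -q "$UPSTREAM" "$WORK/portage"
cd "$WORK/portage"
git branch -q development
git branch -q staging
git branch -q production

# Sync development from upstream, then promote staging -> production.
git checkout -q development
git pull -q origin master
git checkout -q production
git merge -q --ff-only staging
```

Promotions only ever fast-forward the next ''higher'' branch, so cherry-picks stay the exception they should be.&lt;br /&gt;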
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
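Portage&#039;s native repository support mentioned above could be wired up roughly like this (repository name, location and sync URL are placeholders):&lt;br /&gt;

```ini
# /etc/portage/repos.conf/local-overlay.conf (sketch; names and URLs are placeholders)
[local-overlay]
location = /var/db/repos/local-overlay
sync-type = git
sync-uri = https://git.example.com/portage/overlay.git
# Track the branch matching this host's environment (development, staging or production).
sync-git-clone-extra-opts = --branch production
auto-sync = yes
```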
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplications via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
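A minimal sketch of how such a profile could be laid out inside the overlay (paths and values are purely illustrative; the parent file points at an official Gentoo profile as required above):&lt;br /&gt;

```text
profiles/rabe/server/amd64/      (illustrative overlay profile directory)
├── parent           gentoo:default/linux/amd64/13.0
├── make.defaults    USE="-X syslog" CHOST="x86_64-pc-linux-gnu"
├── packages         *sys-apps/openrc          (addition to the base system set)
└── package.unmask   =app-admin/puppet-3.4.2   (unmask a required package)
```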
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and renew certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle due to it not defining how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
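The root/sub-CA structure above can be sketched with plain OpenSSL (subject names and lifetimes are illustrative; a real ceremony would add proper CA extensions, serial management and offline key handling):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: self-signed root CA signs a level 1 intermediate (illustrative only).
set -e
DIR=$(mktemp -d)

# 1. Root CA: self-signed, kept offline afterwards.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -subj "/DC=ch/DC=rabe/CN=RaBe Root CA" \
    -keyout "$DIR/root.key" -out "$DIR/root.crt"

# 2. Level 1 intermediate: CSR signed by the root key.
openssl req -newkey rsa:2048 -nodes \
    -subj "/DC=ch/DC=rabe/CN=RaBe Level 1 CA" \
    -keyout "$DIR/level1.key" -out "$DIR/level1.csr"
openssl x509 -req -days 30 -in "$DIR/level1.csr" \
    -CA "$DIR/root.crt" -CAkey "$DIR/root.key" -CAcreateserial \
    -out "$DIR/level1.crt"

# 3. Check the chain before taking the root key offline.
openssl verify -CAfile "$DIR/root.crt" "$DIR/level1.crt"
```

The DC components match the &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt; naming requirement above.&lt;br /&gt;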
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; those will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
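The semver tagging convention above, as a throwaway sketch:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: semver release tags that GitHub turns into downloadable tarballs.
set -e
REPO=$(mktemp -d)
git init -q "$REPO"
cd "$REPO"
git -c user.email=a@b -c user.name=ci commit -q --allow-empty -m 'initial'
# Annotated tag per Semantic Versioning; GitHub serves it as vX.Y.Z.tar.gz.
git tag -a v1.0.0 -m 'first production release'
git describe --tags
```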
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either ahead of time or while the maintenance is under way&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
** currently packer and packer-warehouse do not seem capable of building gentoo machines out of the box; I tested this with osx/virtualbox using gentoo stage3 and portage snapshots [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
* [https://github.com/jedi4ever/veewee veewee] vagrant box builder (builds stage4 images in a manner similar to packer)&lt;br /&gt;
** has support for a massive amount of guest os types&lt;br /&gt;
*** installs puppet/chef using gem due to the oldish versions in gentoo (and probably elsewhere)&lt;br /&gt;
** supports kvm and others as host os&lt;br /&gt;
** while testing with osx/virtualbox I was able to build and export a vagrant box from gentoo stage3 and portage snapshots without any hiccups [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 11:19, 11 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie. profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub can use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
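A Puppetfile for the librarian-puppet/r10k composition described above might look like this (module names and the internal git URL are placeholders):&lt;br /&gt;

```ruby
# Puppetfile (sketch; module names and URLs are placeholders)
forge "https://forge.puppetlabs.com"

# Published modules pulled from the forge.
mod "puppetlabs/stdlib"

# Roles and profiles from the internal clone; the ref selects the environment branch.
mod "profile",
  :git => "https://git.example.com/puppet.git",
  :ref => "production"
```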
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
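An iPXE boot script on the install host could look roughly like this (the install server name and the per-MAC script scheme are assumptions, not existing infrastructure):&lt;br /&gt;

```text
#!ipxe
# Sketch: chain-load a per-machine install script, keyed on the MAC address.
dhcp
# install.example.com is a placeholder for the install host.
chain http://install.example.com/boot/${net0/mac}.ipxe || shell
```

Distinguishing machines by MAC address, as required above, then just means serving one such script per known MAC.&lt;br /&gt;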
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** ssl has the largest userbase which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement get adopted as soon as OpenSSL catches up (e.g. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2748</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2748"/>
		<updated>2014-01-09T18:52:35Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Binary package requirements */ my 2cts on the &amp;quot;different USE flags&amp;quot; thing&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan to use Gentoo as the infrastructure backbone for a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms (for instance the portage vs. puppet profile confusion). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile, but used during the build phase of the binary packages that make up the final deployment.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a Ruby stack, which is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
*** Yes and no on this one. We clearly need to keep the list of packages that require this at a bare minimum. &amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for instance doesn&#039;t warrant this; we just won&#039;t start the server on non-server nodes. Easy as cake. The server code and its deps won&#039;t do any harm on, say, a desktop or other server box. Even though I can&#039;t think of an example, I do believe we will be needing this possibility when we encounter packages that need to be built using different profiles for different use cases, things like having a php with-curlwrappers vs one with the curl module sans curlwrappers. The important point I take from this is that creating new profiles with small deviations from our default must be very easy (ie. not much work). Basically we need the infra&#039;s support for n different build profiles to be fully automated and well documented. [[User:Lucas|Lucas]] ([[User talk:Lucas|talk]]) 19:52, 9 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages. &lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles on a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
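The binhost URL layout from the requirements above can be sketched as follows (the hostname, environment, arch and profile names are placeholders taken from the examples on this page):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the binhost URL layout described above; the hostname,
# environment, arch and profile names are placeholders.
binhost_url() {
    echo "https://packages.example.com/$1/gentoo/$2/$3"
}

env=production
arch=amd64
system_profile=server
app_profiles="php5-app django-app"

# A client pulls from its system profile's binhost plus one binhost
# per application profile in its stack.
urls=$(binhost_url "$env" "$arch" "$system_profile")
for app in $app_profiles; do
    urls="$urls $(binhost_url "$env" "$arch" "$app")"
done
echo "PORTAGE_BINHOST=\"$urls\""
```

The echoed line is what would end up in a client&#039;s &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;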
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
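The branch promotion flow described above (development -&amp;gt; staging -&amp;gt; production) could look like this; a throwaway repository stands in for the real portage clone:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the development -> staging -> production promotion flow on the
# cloned tree; a throwaway repository stands in for the real portage clone.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email admin@example.com
git config user.name "Infra Admin"

git checkout -q -b development
echo 'sync from upstream' > .sync-marker
git add .sync-marker
git commit -qm 'sync from upstream'

# promote the tested state to the next higher branch (fast-forward only,
# so staging/production never diverge from what was tested)
git checkout -q -b staging
git merge -q --ff-only development
git checkout -q -b production
git merge -q --ff-only staging
```

The same promotion flow applies to the overlay and puppet repositories.&lt;br /&gt;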
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as does cave/paludis), and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
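One way to satisfy the client-certificate half of the authentication requirement could be an nginx vhost that only accepts clients presenting a certificate signed by our client sub-CA (a sketch; all hostnames and paths are placeholders):&lt;br /&gt;

```nginx
server {
    listen 443 ssl;
    server_name packages.example.com;

    ssl_certificate     /etc/ssl/packages.example.com.crt;
    ssl_certificate_key /etc/ssl/packages.example.com.key;

    # only accept clients presenting a certificate signed by our
    # level 2 client CA
    ssl_client_certificate /etc/ssl/client-ca.crt;
    ssl_verify_client on;

    root /var/www/packages;
    autoindex on;
}
```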
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
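The three file groups above could map to a directory layout like the following sketch (the &amp;lt;code&amp;gt;/srv/mirror&amp;lt;/code&amp;gt; prefix is an assumption; the group names are the ones defined above):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the three-group mirror layout; the /srv/mirror prefix is an
# assumption, the group names are the ones defined above.
mirror_path() {
    group=$1
    file=$2
    case $group in
        public|site-local|stack-local) ;;
        *) echo "unknown group: $group" >&2; return 1 ;;
    esac
    echo "/srv/mirror/$group/gentoo/distfiles/$file"
}

mirror_path public zlib-1.2.8.tar.gz
```

The web server would then enforce authentication on the site-local and stack-local prefixes only.&lt;br /&gt;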
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle since the standard does not define how often the CRL must be consulted. Since we are on the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand, puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP and develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API to signing, revoking et. al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
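The root -&amp;gt; sub-CA signing step could be sketched with plain OpenSSL like this (subject names, key sizes and lifetimes are placeholders, and the real root key would of course only ever be handled offline during the ceremony):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the root -> sub-CA signing chain with plain OpenSSL; subject
# names, key sizes and lifetimes are placeholders, and the real root key
# would only ever be handled offline.
set -e
dir=$(mktemp -d)
cd "$dir"

# self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key \
    -subj "/DC=ch/DC=rabe/CN=Example Root CA" -days 7300 -out root.crt 2>/dev/null

# sub-CA: CSR signed by the root with CA extensions
openssl req -newkey rsa:2048 -nodes -keyout sub.key \
    -subj "/DC=ch/DC=rabe/CN=Example Sub CA" -out sub.csr 2>/dev/null
printf 'basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign\n' > ca.ext
openssl x509 -req -in sub.csr -CA root.crt -CAkey root.key -CAcreateserial \
    -extfile ca.ext -days 3650 -out sub.crt 2>/dev/null

openssl verify -CAfile root.crt sub.crt
```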
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
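A minimal sketch of a release tag check following the &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt; rule above (pre-release and build metadata are deliberately not handled):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of a release tag check following the vX.Y.Z rule above
# (pre-release and build metadata are deliberately not handled).
is_release_tag() {
    printf '%s\n' "$1" | grep -Eq '^v(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)\.(0|[1-9][0-9]*)$'
}

if is_release_tag v1.0.0; then
    echo "v1.0.0 is a release tag"
fi
```

Such a check could run in a pre-receive hook on the internal Git host.&lt;br /&gt;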
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the like&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either ahead of time or while the maintenance is under way&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
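One build invocation per (environment, arch, build profile) chroot could look like this sketch (all paths are placeholders, and the emerge call is only echoed here instead of being run):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of one build invocation per (environment, arch, build profile)
# chroot; all paths are placeholders and the emerge call is only echoed
# here instead of being run.
env=staging
arch=amd64
profile=server
chroot_dir=/var/chroots/$env/$arch/$profile
pkgdir=/var/www/packages/$env/gentoo/$arch/$profile

echo "chroot $chroot_dir env PKGDIR=$pkgdir emerge --buildpkg @world"
```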
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie. profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub can use webhooks to push changes to our internal Git as they happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng, but supports multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
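A hypothetical r10k configuration mapping the private clone&#039;s branches to puppet environments on the master could look like this (the remote and basedir are placeholders):&lt;br /&gt;

```yaml
# Hypothetical r10k configuration: every branch of the private clone
# becomes a puppet environment on the master (remote and basedir are
# placeholders).
:cachedir: '/var/cache/r10k'
:sources:
  :internal:
    remote: 'git@git.example.com:puppet.git'
    basedir: '/etc/puppet/environments'
```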
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
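A minimal iPXE boot script for such an install host might look like this (the URL layout is an assumption mirroring the binhost scheme):&lt;br /&gt;

```text
#!ipxe
dhcp
# kernel and initramfs produced by the build host; the URL layout is an
# assumption mirroring the binhost scheme
kernel https://install.example.com/production/gentoo/amd64/server/kernel
initrd https://install.example.com/production/gentoo/amd64/server/initramfs
boot
```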
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, and probably not gentoo either, so this can happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout, somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make things easier for new admins&lt;br /&gt;
** features that OpenSSL does not yet implement will be adopted as soon as OpenSSL catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2747</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2747"/>
		<updated>2014-01-09T18:38:43Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Install host proposal */ add tools concerned with provisioning and handing control to puppet after&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan to use Gentoo as the infrastructure backbone for a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms (for instance the portage vs. puppet profile confusion). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile, but used during the build phase of the binary packages that make up the final deployment.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a Ruby stack, which is based off a Linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (built with &amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (built with &amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages.&lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via an HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
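&lt;br /&gt;
The &amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt; wiring above could be sketched as follows; all hostnames, environment and profile names are placeholders, not actual infrastructure:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the client-side binhost configuration described above.
# Hostnames, environment and profile names are placeholders.
ENVIRONMENT="production"
ARCH="amd64"
SYSTEM_PROFILE="server"
APP_PROFILE="php5-app"

BASE="https://packages.example.com/${ENVIRONMENT}/gentoo/${ARCH}"
PORTAGE_BINHOST="${BASE}/${SYSTEM_PROFILE} ${BASE}/${APP_PROFILE}"

# On a real client this assignment would live in /etc/portage/make.conf
# and packages would be installed with: emerge --getbinpkg <package>
echo "PORTAGE_BINHOST=\"${PORTAGE_BINHOST}\""
```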
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
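&lt;br /&gt;
The branch layout for the cloned tree could look like the following sketch; the upstream remote is simulated here with a throwaway local repository, and all names are placeholders:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: one branch per environment on a local clone of the portage
# tree. "upstream" stands in for the official tree; promotion from one
# environment to the next is a plain fast-forward merge.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Simulated upstream tree (placeholder content)
git init -q upstream
(cd upstream \
  && git config user.email ci@example.com && git config user.name ci \
  && echo 'snapshot' > metadata.txt \
  && git add metadata.txt && git commit -qm 'upstream snapshot')

# Local clone with one branch per environment
git clone -q upstream tree
cd tree
git config user.email ci@example.com && git config user.name ci
git branch development
git branch staging
git branch production

# Promote staging -> production (fast-forward only, no merge commits)
git checkout -q production
git merge -q --ff-only staging
git branch --list
```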
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid duplicated definitions via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible, rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via a HTTP URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle, since the standard does not define how often the CRL must be consulted. As we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand, puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]) so we might need to implement both, or implement OCSP and develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA], last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revocation and the like.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
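&lt;br /&gt;
A rough openssl sketch of the root / intermediate / host-certificate chain discussed above. All subject names and lifetimes are made up; a real setup would additionally maintain a CA database, publish CRL/OCSP distribution points and keep the root strictly offline:&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: self-signed root that only signs a level-1 intermediate,
# which in turn signs host certificates. Placeholder names throughout.
set -e
tmp=$(mktemp -d); cd "$tmp"

# Offline root CA (self-signed; req -x509 marks it CA:TRUE by default)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/DC=ch/DC=rabe/CN=Example Root CA" \
  -keyout root.key -out root.crt

# Level-1 intermediate, signed by the root, constrained to pathlen 0
openssl req -newkey rsa:2048 -nodes \
  -subj "/DC=ch/DC=rabe/CN=Example Intermediate CA" \
  -keyout int.key -out int.csr
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -days 1825 -extfile /dev/stdin -out int.crt <<'EOF'
basicConstraints=critical,CA:TRUE,pathlen:0
keyUsage=critical,keyCertSign,cRLSign
EOF

# A host certificate signed by the intermediate
openssl req -newkey rsa:2048 -nodes \
  -subj "/DC=ch/DC=rabe/CN=host01.example.com" \
  -keyout host.key -out host.csr
openssl x509 -req -in host.csr -CA int.crt -CAkey int.key \
  -CAcreateserial -days 365 -out host.crt

# Verify the full chain
openssl verify -CAfile root.crt -untrusted int.crt host.crt
```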
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive informations)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
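&lt;br /&gt;
Tagging per semver could look like this sketch (the component repository is simulated locally; names and messages are placeholders):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch: semver release tags (vX.Y.Z) on a component repository, as
# GitHub turns such tags into tarballs that ebuilds can fetch.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q component
cd component
git config user.email ci@example.com && git config user.name ci
echo 'component' > README && git add README
git commit -qm 'initial import'

# First production release per semver
git tag v1.0.0

# A backwards-compatible feature release bumps the minor version
echo 'feature' >> README && git commit -qam 'add feature'
git tag v1.1.0

git tag --list 'v*'
```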
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either ahead of time or while the maintenance is ongoing&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunns [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** github may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng, but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* Tools that run puppet on freshly installed machines (and also do some provisioning)&lt;br /&gt;
** [https://forge.puppetlabs.com/puppetlabs/razor puppetlabs razor] bare metal/cloud provisioning tool&lt;br /&gt;
** [http://www.vagrantup.com/ vagrant] cloud provisioning aimed at provisioning developer boxes (with virtualbox). Has 3rd party support for various cloud systems. Vagrant might be interesting for creating dev clouds. I&#039;ve seen this being used on production sites.&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store root cert key on 2 sdcards and as 1 printout somewhere safely&lt;br /&gt;
** store level 1 intermediate key on sdcards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** ssl has the largest userbase which should make it easier on new admins&lt;br /&gt;
** features that openssl does not yet implement get adopted as soon as openssl catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is ruby on rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2746</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2746"/>
		<updated>2014-01-09T18:33:04Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Links */ gentoo example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based off a system profile, but used during the build phase of the binary packages that go into the final deployment.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a ruby stack, which is in turn based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (built with &amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (built with &amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages.&lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
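Put together, the binhost layout described above can be sketched as a client-side shell fragment. The host name, environment and profile names are the placeholder examples from this page, not a real deployment:

```shell
#!/bin/sh
# Compose the PORTAGE_BINHOST value a client would carry in its
# /etc/portage/make.conf (all names below are placeholder examples).
ENVIRONMENT="production"
ARCH="amd64"
SYSTEM_PROFILE="server"
APP_PROFILE="php5-app"

BASE="https://packages.example.com/${ENVIRONMENT}/gentoo/${ARCH}"
# One entry per build profile the client consumes, as required above.
PORTAGE_BINHOST="${BASE}/${SYSTEM_PROFILE} ${BASE}/${APP_PROFILE}"
echo "${PORTAGE_BINHOST}"
```

A client would set the resulting value in its make.conf and install binaries with emerge --getbinpkg.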
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
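The branching requirements above can be sketched with plain git. The commands below use a throwaway local repository standing in for the cloned tree (branch names follow the environments named above; the real clone would additionally track the official portage tree as an upstream remote):

```shell
#!/bin/sh
# Sketch: one branch per environment, promotion by merging upwards.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master .
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "sync from upstream"
# One branch per environment.
git branch development
git branch staging
git branch production
# Promoting staging to production is a plain (fast-forward) merge.
git checkout -q production
git merge -q --ff-only staging
git branch --list
```

Cherry picks against a single ebuild (e.g. for a zero-day fix) would be ordinary `git cherry-pick` commits on the affected branch.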
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid duplicate definitions via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, BINHOSTS, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
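A minimal sketch of how such a profile tree could be laid out inside the overlay; the directory names, the parent Gentoo profile and the make.defaults values are illustrative assumptions, not the actual profiles:

```shell
#!/bin/sh
# Sketch of an overlay profile tree with base -> server/amd64 inheritance.
set -e
overlay=$(mktemp -d)
mkdir -p "$overlay/profiles/base" "$overlay/profiles/server/amd64"
# The base profile names an official Gentoo profile as its master.
echo "gentoo:default/linux/amd64/13.0" > "$overlay/profiles/base/parent"
# server/amd64 inherits from base, avoiding duplicated definitions.
echo "../../base" > "$overlay/profiles/server/amd64/parent"
# Profile-level defaults for the client's make.conf (illustrative values).
printf 'CHOST="x86_64-pc-linux-gnu"\nCFLAGS="-O2 -pipe"\nUSE="-X ldap"\n' \
    > "$overlay/profiles/server/amd64/make.defaults"
find "$overlay/profiles" -type f
```

Note Tiziano's warning above: non-incremental variables set in make.defaults shadow the parent profile and must be tracked by hand.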
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
**  Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt; for example), to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) hard disks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle due to it not defining how often the CRL must be consulted. Since we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand, puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP as well as develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
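The tagging convention can be sketched against a throwaway repository (the version number is just an example):

```shell
#!/bin/sh
# Sketch: semver release tag that GitHub exposes as a tarball.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "release 1.0.0"
# Tag as vX.Y.Z per Semantic Versioning.
git tag v1.0.0
git describe --tags
```

An ebuild's SRC_URI could then point at the tarball GitHub generates for the &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt; tag.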
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either ahead of time or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
** [https://github.com/pierreozoux/packer-warehouse/blob/master/var-files/gentoo/generate_latest.sh gentoo script from packer-warehouse] used with packer to create a minimal gentoo vagrant box&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunns [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout, somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use level 1 intermediate key to sign level 2 cas as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 certs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest userbase, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not implement get used as soon as OpenSSL catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
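The two-level hierarchy above can be sketched with plain OpenSSL; subject names, key sizes and lifetimes are illustrative and not the actual certificate policy (the real root key would of course only ever be handled offline):

```shell
#!/bin/sh
# Sketch: offline root CA signs a level 1 intermediate, nothing else.
set -e
dir=$(mktemp -d)
cd "$dir"
# Self-signed root certificate (kept offline, used only for sub-CAs).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout root.key -out root.crt \
    -subj "/DC=ch/DC=rabe/CN=Example Root CA"
# Level 1 intermediate: key + CSR, then signed by the root.
openssl req -newkey rsa:2048 -nodes \
    -keyout l1.key -out l1.csr \
    -subj "/DC=ch/DC=rabe/CN=Example Level 1 CA"
openssl x509 -req -in l1.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -days 1825 -out l1.crt
# The intermediate must verify against the root.
openssl verify -CAfile root.crt l1.crt
```

A real sub-CA certificate would additionally need CA:TRUE basic constraints via an extensions file; this sketch only shows the signing chain.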
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2745</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2745"/>
		<updated>2014-01-09T18:30:41Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Build host proposal */ link and &amp;#039;splain packer.io&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile, but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a ruby stack, which is based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags. For example, a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
*** Yes, we need to and can go there :-) I agree with you, that we should do this only if necessary, apache for example can be built once and has the ability to turn features (module loading) on/off via its configuration. Other software does not provide such run-time configuration which results in unwanted server-software and dependencies on the installed hosts (&amp;lt;code&amp;gt;net-analyzer/zabbix&amp;lt;/code&amp;gt; for example). I clearly do not want to have a dedicated build environment for each of those packages, I would rather see a build env, called minimal for example, which is used to build all those database packages with only lib and clients enabled (use the same env for PostgreSQL, OpenLDAP, MySQL etc.). As stated before, the whole build process needs to be automated, so I don&#039;t see a considerable increase of maintenance work coming up here. The dependency problem is mitigated through the fact that we have a frozen portage tree for all our build envs and therefore use the same versions everywhere. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 12:04, 6 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages.&lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary package for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles (such as php5-app, django-app etc.)&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a pre-defined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
***  this should be avoided at all cost since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
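The branch layout above can be sketched with plain Git; the branch names come from this page, while the repository contents are made up for illustration:&lt;br /&gt;

```shell
# Sketch of the development/staging/production flow for the tree clone.
set -e
cd "$(mktemp -d)"
git init -q tree
cd tree
git config user.email demo@example.com
git config user.name demo
echo snapshot > Manifest
git add Manifest
git commit -qm "import upstream snapshot"
git branch development   # receives automatic upstream syncs
git branch staging       # receives tested merges from development
git branch production    # receives merges promoted up from staging
git checkout -q development
echo update >> Manifest
git commit -qam "sync from upstream"
git checkout -q staging
git merge -q --no-edit development   # promote to the next higher branch
git log -1 --format=%s
```

Merging staging into production works the same way, one branch higher.&lt;br /&gt;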
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage has now direct repository support (as has cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
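With the direct repository support mentioned above, the overlay can be wired up without layman via a &amp;lt;code&amp;gt;repos.conf&amp;lt;/code&amp;gt; entry; the name, path and URL below are placeholders:&lt;br /&gt;

```ini
# /etc/portage/repos.conf/example-overlay.conf (all values are placeholders)
[example-overlay]
location = /var/db/repos/example-overlay
sync-type = git
sync-uri = https://git.example.com/portage/example-overlay.git
auto-sync = yes
```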
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, PORTAGE_BINHOST, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
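Profile inheritance is expressed through &amp;lt;code&amp;gt;parent&amp;lt;/code&amp;gt; files; a hypothetical server/amd64 profile pulling in both an official Gentoo master profile and the shared base could look like this (paths and profile version are assumptions):&lt;br /&gt;

```text
# profiles/server/amd64/parent (hypothetical layout, one parent per line)
gentoo:default/linux/amd64/13.0
../../base
```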
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
** Serves fetch-restricted files (for example &amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt;) to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
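Put together with the package host section above, a production amd64 server client might carry a &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt; fragment like this (the hostnames are the placeholder URLs from this page, the rest is assumed):&lt;br /&gt;

```ini
# /etc/portage/make.conf fragment (placeholder hostnames)
GENTOO_MIRRORS="https://mirror.example.com/public/gentoo/distfiles"
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server"
FEATURES="getbinpkg"   # prefer binary packages from the binhost
```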
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status either via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRL is not worth the hassle, since the standard does not define how often the CRL must be consulted. As we are in the same physical net, OCSP should be far superior here (thanks to its live checking support). On the other hand, puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP and develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN-OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use a @rabe email when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self service interface (ie for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revoking and the like.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera datafiles in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull-requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Signing-off might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
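The tagging convention above, sketched with plain Git (repository contents made up):&lt;br /&gt;

```shell
# Sketch of semver release tagging; GitHub then exposes v1.0.0 as a tarball.
set -e
cd "$(mktemp -d)"
git init -q component
cd component
git config user.email demo@example.com
git config user.name demo
echo code > file
git add file
git commit -qm "first production release"
git tag -a v1.0.0 -m "first production release"
git describe --tags
```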
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either beforehand or while the maintenance is ongoing&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
=== Links ===&lt;br /&gt;
* [http://packer.io packer.io] can be used to build stage4 (containing a kernel) images and seems to work for gentoo. Packer often gets used to build Vagrant boxes.&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie. profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub may use hooks to push content to our internal Git as changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng but supports multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
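A &amp;lt;code&amp;gt;Puppetfile&amp;lt;/code&amp;gt; consumed by librarian-puppet or r10k could look like this; apart from the puppet-syslogng module mentioned above, the module names and options are assumptions (r10k notably ignores the &amp;lt;code&amp;gt;:path&amp;lt;/code&amp;gt; option):&lt;br /&gt;

```ruby
# Puppetfile sketch; module names and repo layout are assumptions
forge 'https://forge.puppetlabs.com'

mod 'purplehazech/syslogng'        # a module published to the forge

mod 'role',                        # roles/profiles live in the monolithic repo
  :git  => 'https://github.com/radiorabe/puppet.git',
  :path => 'role'
```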
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
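An iPXE script on the install host could then dispatch each machine by its MAC address, matching the MAC-based grouping from the requirements (hostname and path are hypothetical):&lt;br /&gt;

```text
#!ipxe
# hypothetical dispatch script served as the default boot target
dhcp
chain https://install.example.com/boot/${net0/mac}.ipxe
```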
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in german!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with CentOS or similar (not Debian, and probably not Gentoo, to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use the level 1 intermediate key to sign level 2 CAs as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 CAs as needed&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement (ie. [http://cmpforopenssl.sourceforge.net/ CMP]) get adopted as soon as OpenSSL catches up&lt;br /&gt;
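The two-level bootstrap above, sketched with plain OpenSSL; the DNs follow the dc=rabe,dc=ch rule from the requirements, while key sizes and validity periods are assumptions:&lt;br /&gt;

```shell
# Sketch of the root and level 1 signing steps. A real level 1 certificate
# would also need CA basic constraints (via -extfile); omitted here.
set -e
cd "$(mktemp -d)"
openssl genrsa -out root.key 2048 2>/dev/null
openssl req -x509 -new -key root.key -sha256 -days 3650 \
  -subj '/DC=ch/DC=rabe/CN=RaBe Root CA' -out root.crt
openssl genrsa -out level1.key 2048 2>/dev/null
openssl req -new -key level1.key \
  -subj '/DC=ch/DC=rabe/CN=RaBe Level 1 CA' -out level1.csr
openssl x509 -req -in level1.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -sha256 -days 1825 -out level1.crt 2>/dev/null
openssl verify -CAfile root.crt level1.crt
```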
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* [http://www.gentoo.org/proj/en/gentoo-alt/prefix/ Gentoo Prefix]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2642</id>
		<title>stoney orchestra: Roadmap</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2642"/>
		<updated>2014-01-06T19:34:31Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== backlog ==&lt;br /&gt;
&lt;br /&gt;
=== unsorted ===&lt;br /&gt;
These might be rather epic and need breaking down until they are sensible user-stories.&lt;br /&gt;
&lt;br /&gt;
* investigate, document and implement hooks and git-scripts needed for continuous-(integration|development|.*)&lt;br /&gt;
* evaluate and decide between puppet-librarian, r10k or byoc solution for applying Puppetfile/Modulefile&lt;br /&gt;
* develop and establish &amp;quot;frontend&amp;quot; tooling like git-* scripts or puppet-rake tasks for devs and admins&lt;br /&gt;
* mcollective&lt;br /&gt;
&lt;br /&gt;
== links ==&lt;br /&gt;
&lt;br /&gt;
* Architecture&lt;br /&gt;
** [[Gentoo_Infrastructure#Puppet_proposal]]&lt;br /&gt;
** [http://www.craigdunn.org/2012/05/239/ Craig Dunn&#039;s Blog: Designing Puppet – Roles and Profiles]&lt;br /&gt;
* Puppet Modules&lt;br /&gt;
** [http://forge.puppetlabs.com/gentoo/portage gentoo/portage puppet module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/syslogng syslogng module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/ccache ccache module] ;)&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]][[Category:Roadmap]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2641</id>
		<title>stoney orchestra: Roadmap</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2641"/>
		<updated>2014-01-06T19:33:15Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== backlog ==&lt;br /&gt;
&lt;br /&gt;
=== unsorted ===&lt;br /&gt;
These might be rather epic and need breaking down until they are sensible user-stories.&lt;br /&gt;
&lt;br /&gt;
* investigate, document and implement hooks and git-scripts needed for continuous-(integration|development|.*)&lt;br /&gt;
* evaluate and decide between puppet-librarian, r10k or byoc solution for applying Puppetfile/Modulefile&lt;br /&gt;
* develop and establish &amp;quot;frontend&amp;quot; tooling like git-* scripts or puppet-rake tasks for devs and admins&lt;br /&gt;
&lt;br /&gt;
== links ==&lt;br /&gt;
&lt;br /&gt;
* Architecture&lt;br /&gt;
** [[Gentoo_Infrastructure#Puppet_proposal]]&lt;br /&gt;
** [http://www.craigdunn.org/2012/05/239/ Craig Dunn&#039;s Blog: Designing Puppet – Roles and Profiles]&lt;br /&gt;
* Puppet Modules&lt;br /&gt;
** [http://forge.puppetlabs.com/gentoo/portage gentoo/portage puppet module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/syslogng syslogng module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/ccache ccache module] ;)&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]][[Category:Roadmap]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2640</id>
		<title>stoney orchestra: Roadmap</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2640"/>
		<updated>2014-01-06T19:27:27Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== backlog ==&lt;br /&gt;
&lt;br /&gt;
=== unsorted ===&lt;br /&gt;
These might be rather epic and need breaking down until they are sensible user-stories.&lt;br /&gt;
&lt;br /&gt;
* investigate, document and implement hooks and git-scripts needed for continuous-(integration|development|.*)&lt;br /&gt;
* evaluate and decide between puppet-librarian, r10k or byoc solution for using Puppetfile/Modulefile&lt;br /&gt;
* develop and establish &amp;quot;frontend&amp;quot; tooling like git-* scripts or puppet-rake tasks for devs and admins&lt;br /&gt;
&lt;br /&gt;
== links ==&lt;br /&gt;
&lt;br /&gt;
* Architecture&lt;br /&gt;
** [[Gentoo_Infrastructure#Puppet_proposal]]&lt;br /&gt;
** [http://www.craigdunn.org/2012/05/239/ Craig Dunn&#039;s Blog: Designing Puppet – Roles and Profiles]&lt;br /&gt;
* Puppet Modules&lt;br /&gt;
** [http://forge.puppetlabs.com/gentoo/portage gentoo/portage puppet module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/syslogng syslogng module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/ccache ccache module] ;)&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]][[Category:Roadmap]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2639</id>
		<title>stoney orchestra: Roadmap</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2639"/>
		<updated>2014-01-06T19:15:18Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
* Architecture&lt;br /&gt;
** [http://www.craigdunn.org/2012/05/239/ Craig Dunn&#039;s Blog: Designing Puppet – Roles and Profiles]&lt;br /&gt;
* Puppet Modules&lt;br /&gt;
** [http://forge.puppetlabs.com/gentoo/portage gentoo/portage puppet module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/syslogng syslogng module]&lt;br /&gt;
** [http://forge.puppetlabs.com/purplehazech/ccache ccache module] ;)&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]][[Category:Roadmap]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2638</id>
		<title>stoney orchestra: Roadmap</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Roadmap&amp;diff=2638"/>
		<updated>2014-01-06T19:13:12Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
* [http://forge.puppetlabs.com/gentoo/portage gentoo/portage puppet module]&lt;br /&gt;
* [http://www.craigdunn.org/2012/05/239/ Craig Dunn&#039;s Blog: Designing Puppet – Roles and Profiles]&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]][[Category:Roadmap]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2632</id>
		<title>Gentoo Infrastructure</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Gentoo_Infrastructure&amp;diff=2632"/>
		<updated>2014-01-05T21:37:32Z</updated>

		<summary type="html">&lt;p&gt;Lucas: /* Puppet requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Overview =&lt;br /&gt;
This article describes how we plan on using gentoo as an infrastructure backbone for creating a complete and modern IT architecture.&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
@TODO We need to clean up some terms already (for instance the portage vs puppet profile thing). A [[:Category:Glossary|glossary]] should help us define terms more closely (and stick to the definitions).&lt;br /&gt;
&lt;br /&gt;
; portage profile&lt;br /&gt;
: A profile in gentoo portage. Defines either a system or application stack for portage.&lt;br /&gt;
; portage build profile&lt;br /&gt;
: A profile in gentoo portage. Based on a system profile, but used during the build phase of the binary packages used in the final deploy.&lt;br /&gt;
; puppet profile&lt;br /&gt;
: A puppet profile contains the implementation logic of how to install and configure an aspect of a system.&lt;br /&gt;
; stack&lt;br /&gt;
: A stack contains a complete and deployable product that may be provisioned and used. Stacks have very simple inheritance, letting the admin create stack trees based on each other. For instance, a Ruby on Rails stack will be based off a ruby stack, which is in turn based off a linux stack.&lt;br /&gt;
&lt;br /&gt;
= Required components =&lt;br /&gt;
* Build host(s) for binary packages&lt;br /&gt;
* HTTP server for serving binary packages and distfiles (required by the ebuilds)&lt;br /&gt;
* Git clone of official portage tree&lt;br /&gt;
* Overlay(s)&lt;br /&gt;
* Own portage profile(s)&lt;br /&gt;
* rsync or Git server for serving the Overlay and the portage profiles&lt;br /&gt;
* Stage3 building system&lt;br /&gt;
* Puppet for configuration management and software installation&lt;br /&gt;
* Git version control for everything (overlays, portage profiles, puppet manifests and scripts/code)&lt;br /&gt;
* Install host (PXE boot / TFTP / DHCP)&lt;br /&gt;
** emc/puppetlabs [https://github.com/puppetlabs/Razor razor] can do this but needs some work for gentoo &lt;br /&gt;
* Automatic base installation script&lt;br /&gt;
** also in the scope of razor&lt;br /&gt;
* Separation of development, staging and production environments&lt;br /&gt;
** tagged and managed in git&lt;br /&gt;
* PKI environment (with dedicated sub CAs) for X509 certificates (used for Puppet, server and client certs etc.)&lt;br /&gt;
* git web interface (make dotfiles and frozen clones accessible to power-users)&lt;br /&gt;
* Central authentication service&lt;br /&gt;
* DNS, DHCP and NTP services&lt;br /&gt;
* Monitoring and alarming system&lt;br /&gt;
* Logging&lt;br /&gt;
* versioning for everything (if it is a committable file, use semver on its repo)&lt;br /&gt;
&lt;br /&gt;
== Binary package requirements ==&lt;br /&gt;
* Ability to build and install binary packages with the same version but different USE flags, for example a MySQL server package (&amp;lt;code&amp;gt;-minimal&amp;lt;/code&amp;gt;) and a MySQL client &amp;amp; libs package (&amp;lt;code&amp;gt;minimal&amp;lt;/code&amp;gt;)&lt;br /&gt;
** don&#039;t go there: this imposes a significant amount of maintenance work and may still break. Rather provide large enough base sets and accept that some packages install too much (you can still disable them at runtime) and build the few deviations from the rule on the servers from source --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:39, 3 January 2014 (CET)&lt;br /&gt;
* Providing binary packages for different major (and sometimes minor) versions, for example: &amp;lt;code&amp;gt;dev-db/mysql-5.X.Y&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dev-db/mysql-6.X.Y&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Provide binary packages for pre-compiled Linux kernels and modules (not just a binary package of &amp;lt;code&amp;gt;sys-kernel/gentoo-sources&amp;lt;/code&amp;gt;)&lt;br /&gt;
** This makes it possible to build stage4 images from binary packages.&lt;br /&gt;
** Most likely there will be separate packages for servers and desktops built with different genkernel configs.&lt;br /&gt;
* Handle reverse dependency updates and ABI changes&lt;br /&gt;
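On the build side, producing those binary packages mostly means enabling &amp;lt;code&amp;gt;buildpkg&amp;lt;/code&amp;gt; in each build chroot (a sketch; the PKGDIR path is an assumption mirroring the package host URL scheme):&lt;br /&gt;

```ini
# make.conf fragment for a build chroot (path is an assumption)
FEATURES="buildpkg"   # write a binary package for everything emerged
PKGDIR="/var/packages/production/gentoo/amd64/server"
```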
&lt;br /&gt;
== Build host requirements ==&lt;br /&gt;
* Build binary packages for all required software&lt;br /&gt;
* Support for multiple environments (development, staging and production)&lt;br /&gt;
* Support for multiple architectures (such as x86, amd64 etc.)&lt;br /&gt;
* Support for multiple build profiles&lt;br /&gt;
** system (or base) profile, such as desktop or server (stage3) (all the packages contained within the &amp;lt;code&amp;gt;/etc/portage/make.profile&amp;lt;/code&amp;gt; or via &amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;)&lt;br /&gt;
** application profiles, such as php5-app, django-app etc.&lt;br /&gt;
** simple inheritance is used for things like python-app -&amp;gt; django-app&lt;br /&gt;
** stacks consist of one system profile and multiple application profiles&lt;br /&gt;
** don&#039;t do this: Gentoo itself has only a few profiles and even there issues arise when combining them (for example desktop + selinux-hardened) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:40, 3 January 2014 (CET)&lt;br /&gt;
*** Those are build-profiles (for example chroots or some sort of overlay-fs) not Gentoo (portage) profiles, we definitely need to clarify those terms ;) --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 20:01, 5 January 2014 (CET)&lt;br /&gt;
* All portage build profiles will use a system profile as their base profile&lt;br /&gt;
* Ability to update an existing build profile, without the need to build it from scratch&lt;br /&gt;
* Ability to do fully automated clean builds (ie. for new archs or new stacks)&lt;br /&gt;
* Ability to automatically update all development profiles at a predefined frequency such as daily, weekly or monthly, and be notified about build failures&lt;br /&gt;
** [http://jenkins-ci.org/ jenkins ci] can do this using one jenkins master and at least one build slave per architecture.&lt;br /&gt;
** Other options would be [https://github.com/travis-ci/travis-ci travis ci] (not ready for in-house use) or [http://cruisecontrol.sourceforge.net/ cruise control]&lt;br /&gt;
** Rabe already has a jenkins instance: [http://intranet.rabe.ch/jenkins/]. The instance [[Jenkins-01]] is more or less modern and should be easy to reintegrate with puppet.&lt;br /&gt;
* Each build profile stores the built binary packages under a predefined directory which will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Application build profiles store only the extra packages within the above directory; packages included in a base profile won&#039;t be duplicated.&lt;br /&gt;
* Old or no longer supported packages will be removed automatically&lt;br /&gt;
* Build a stage 3 tarball, which can be used for the automatic installation via PXE/TFTP.&lt;br /&gt;
** must be able to build a stage tarball for each of the available environment-arch-system profile combinations&lt;br /&gt;
* Handle reverse dependency updates and ABI changes (aka &amp;lt;code&amp;gt;revdep-rebuild&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Handle perl and python (maybe more) dependency updates (aka &amp;lt;code&amp;gt;perl-cleaner&amp;lt;/code&amp;gt; &amp;amp; &amp;lt;code&amp;gt;python-updater&amp;lt;/code&amp;gt;)&lt;br /&gt;
* Ability to build kernel and modules&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone requirements ==&lt;br /&gt;
* The official portage tree needs to be cloned via Git, which basically enables one to:&lt;br /&gt;
** keep the control over portage tree updates&lt;br /&gt;
** provide an old version of the tree&lt;br /&gt;
** cherry pick updates&lt;br /&gt;
*** This should be avoided at all costs since it can lead to various sorts of breakages (ebuild &amp;lt;-&amp;gt; ebuild, ebuild &amp;lt;-&amp;gt; eclass, ebuild &amp;lt;-&amp;gt; profile, eclass &amp;lt;-&amp;gt; profile interaction) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:24, 3 January 2014 (CET)&lt;br /&gt;
**** Yes, I agree. Nonetheless, we need the &#039;&#039;possibility&#039;&#039; to do cherry picking, for example to react on zero-day exploits. --[[User:Chrigu|Chrigu]] ([[User talk:Chrigu|talk]]) 19:53, 5 January 2014 (CET)&lt;br /&gt;
* Support for a development, staging and production branch&lt;br /&gt;
** Ability to automatically sync from upstream&lt;br /&gt;
** Easy merge support from one branch to the next &#039;&#039;higher&#039;&#039; one (staging -&amp;gt; production)&lt;br /&gt;
* Notification support for new [http://www.gentoo.org/security/en/glsa/index.xml GLSAs] which affect packages within the cloned trees.&lt;br /&gt;
** Either via automatic update and merge of &amp;lt;code&amp;gt;/usr/portage/metadata/glsa&amp;lt;/code&amp;gt; or via external mechanisms such as consulting the [http://www.gentoo.org/rdf/en/glsa-index.rdf RDF feed].&lt;br /&gt;
** Having an inventory by collecting puppet facts allows to check for security updates in a central location --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:31, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage overlay requirements ==&lt;br /&gt;
* One Git based portage [http://www.gentoo.org/proj/en/overlays/userguide.xml overlay]&lt;br /&gt;
** Contains own [[#Portage_profile_requirements|portage profiles]]&lt;br /&gt;
** Contains own or modified ebuilds or legacy ones removed from the official tree&lt;br /&gt;
* Support for development, staging and production environment (via Git branches)&lt;br /&gt;
* [http://layman.sourceforge.net/ Layman] compatibility&lt;br /&gt;
** Portage now has direct repository support (as does cave/paludis) and layman may be omitted --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:32, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Portage profile requirements ==&lt;br /&gt;
* Multiple [http://wiki.gentoo.org/wiki/Profile Portage profiles] stored within the [[#Overlay_requirements|overlay]].&lt;br /&gt;
** One for base, desktop and server (maybe more in the future, such as streambox)&lt;br /&gt;
*** desktop and server both inherit from the base profile which serves as the lowest common denominator.&lt;br /&gt;
* Support for multiple architectures (such as x86 and amd64)&lt;br /&gt;
** Avoid definition duplication via parent profile inheritance.&lt;br /&gt;
* All the profiles have an official Gentoo profile as their master&lt;br /&gt;
* Profiles include only packages belonging to a base system, not an application stack (those will be managed via puppet recipes)&lt;br /&gt;
* Profiles can be used to unmask packages required but not belonging to the base system&lt;br /&gt;
* Profiles set all the default values for the client&#039;s [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html &amp;lt;code&amp;gt;make.conf&amp;lt;/code&amp;gt;], such as USE flags, PORTAGE_BINHOST, GENTOO_MIRRORS, CFLAGS, CHOST etc.&lt;br /&gt;
** &#039;&#039;&#039;Warning&#039;&#039;&#039;: many such variables are not incremental and therefore need duplication of Gentoo base profile variables (requiring that someone tracks changes in those variables) --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:29, 3 January 2014 (CET)&lt;br /&gt;
* keep the profiles (and the inheritance structure) as simple as possible; rather duplicate than inherit for small deviations to avoid inheritance issues --[[User:Tiziano|Tiziano]] ([[User talk:Tiziano|talk]]) 14:33, 3 January 2014 (CET)&lt;br /&gt;
&lt;br /&gt;
== Package host requirements ==&lt;br /&gt;
* Serving files via HTTPS&lt;br /&gt;
** Binary packages for all the clients (&amp;lt;code&amp;gt;PORTAGE_BINHOST&amp;lt;/code&amp;gt;), which were built by the [[#Build_host_requirements|build host]]&lt;br /&gt;
*** Binary packages will be accessible via an HTTPS URL such as &amp;lt;code&amp;gt;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/BUILD-PROFILE-NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
*** Clients will have &amp;lt;code&amp;gt;PORTAGE_BINHOST=&amp;quot;https://packages.example.com/ENVIRONMENT/gentoo/ARCH/SYSTEM-PROFILE-NAME https://packages.example.com/ENVIRONMENT/gentoo/ARCH/APP-STACK-PROFILE-NAME ...&amp;quot;&amp;lt;/code&amp;gt; set in their &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
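A client consuming this package host needs only a couple of lines in its &amp;lt;code&amp;gt;/etc/portage/make.conf&amp;lt;/code&amp;gt;; a minimal sketch, assuming the URL scheme above (environment, arch and profile names are placeholders):&lt;br /&gt;

```shell
# /etc/portage/make.conf (client side) -- sketch only, names are placeholders
# one space-separated binhost entry per build profile
PORTAGE_BINHOST="https://packages.example.com/production/gentoo/amd64/server https://packages.example.com/production/gentoo/amd64/app-stack"
# prefer the prebuilt binary packages, fall back to building from source
EMERGE_DEFAULT_OPTS="--usepkg"
```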
&lt;br /&gt;
== File mirror host requirements ==&lt;br /&gt;
* Hosts all the files required to build a package (&amp;lt;code&amp;gt;GENTOO_MIRRORS=mirror.example.com/public/gentoo/distfiles&amp;lt;/code&amp;gt;)&lt;br /&gt;
** Acts as a caching mirror for already downloaded packages from an official mirror&lt;br /&gt;
** Serves fetch-restricted files (&amp;lt;code&amp;gt;dev-java/oracle-jdk-bin&amp;lt;/code&amp;gt;, for example) to authorized clients&lt;br /&gt;
* Files are served via HTTPS&lt;br /&gt;
* Distinguishes between three groups of files&lt;br /&gt;
** &#039;&#039;&#039;public&#039;&#039;&#039;: Files which are available to all clients (theoretically even to the entire internet)&lt;br /&gt;
** &#039;&#039;&#039;site-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure (for example those which would put us into [http://www.bettercallsaul.com/ legal troubles] if available to the public)&lt;br /&gt;
** &#039;&#039;&#039;stack-local&#039;&#039;&#039;: Files which are only available to authenticated clients belonging to the same infrastructure and the software stack group (private files of a specific customer) &lt;br /&gt;
* Provides an easy way to let an administrator manually upload new files, for example via WebDAV-CGI, SFTP or a similar mechanism.&lt;br /&gt;
* Possibility to authenticate clients either via HTTP basic auth or client certificates.&lt;br /&gt;
* Old or no longer supported files will be removed automatically&lt;br /&gt;
* Can be implemented on the [[#Build_host_requirements|build host]]&lt;br /&gt;
&lt;br /&gt;
== Puppet requirements ==&lt;br /&gt;
* moved to [[stoney_orchestra:_Requirements]], included below for reference.&lt;br /&gt;
&lt;br /&gt;
{{:stoney orchestra: Requirements}}&lt;br /&gt;
&lt;br /&gt;
== Install host requirements ==&lt;br /&gt;
* Ability to install physical and virtual machines&lt;br /&gt;
* Distinguish machines by their Ethernet MAC address&lt;br /&gt;
* Provide a PXE/TFTP boot mechanism&lt;br /&gt;
* Partition and format the (virtual) harddisks&lt;br /&gt;
* Install a stage3 image which was built by the build host&lt;br /&gt;
* Bootstrap puppet, enabling it to take over the individual installation and customization.&lt;br /&gt;
* Group hosts into&lt;br /&gt;
** environments (development, staging and production)&lt;br /&gt;
** architectures (such as x86, amd64 etc.)&lt;br /&gt;
** portage profiles (system profiles such as desktop and server)&lt;br /&gt;
** &amp;lt;s&amp;gt;stacks (comprising a complete product as a service with the underlying infrastructure)&amp;lt;/s&amp;gt; this is the task of Puppet --[[Benutzer:Chaf|Chaf]] ([[Benutzer Diskussion:Chaf|Diskussion]]) 09:42, 19. Dez. 2013 (CET)&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure requirements ==&lt;br /&gt;
* Local certificate authority for signing [http://en.wikipedia.org/wiki/X509 X.509] certificates.&lt;br /&gt;
* Master certificate authority root certificate which is only used to sign Sub-CA certificates&lt;br /&gt;
* Sub certificate authorities used for various cases such as&lt;br /&gt;
** Puppet certificates [http://docs.puppetlabs.com/puppet/3/reference/config_ssl_external_ca.html]&lt;br /&gt;
** User certificates&lt;br /&gt;
** Client certificates&lt;br /&gt;
** Host certificates&lt;br /&gt;
* Ability to sign, revoke and extend certificates&lt;br /&gt;
* Publish certificate revocation status via [http://en.wikipedia.org/wiki/Certificate_revocation_list CRL] and/or [http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol OCSP]&lt;br /&gt;
** CRLs are not worth the hassle, since the standard does not define how often a CRL must be consulted. Since we are on the same physical network, OCSP should be far superior here (thanks to its live checking support). On the other hand, puppet does not do OCSP yet (redmine: [http://projects.puppetlabs.com/issues/10111 #10111]), so we might need to implement both, or implement OCSP and develop our own automated revocation for puppet.&lt;br /&gt;
* Choose DNs below &amp;lt;code&amp;gt;dc=rabe,dc=ch&amp;lt;/code&amp;gt;&lt;br /&gt;
* register a PEN OID as issued by IANA if custom schema work is required&lt;br /&gt;
** Use an @rabe email address when requesting a PEN at [http://pen.iana.org/pen/PenApplication.page IANA]; last time the @purplehaze.ch address was a problem!&lt;br /&gt;
* Some of the aforementioned sub-CAs might be implemented as robot CAs with a self-service interface (ie. for authorized users).&lt;br /&gt;
* Consider using [http://en.wikipedia.org/wiki/Certificate_Management_Protocol CMP] or [http://en.wikipedia.org/wiki/Certificate_Management_over_CMS CMC] as an API for signing, revocation et al.&lt;br /&gt;
** Since the underlying RFCs of both these protocols are rather new, they are not yet broadly supported.&lt;br /&gt;
* Keep local root CA offline!&lt;br /&gt;
** Maybe use an old netbook as root CA :P&lt;br /&gt;
* Support GPG keys for signing packages&lt;br /&gt;
&lt;br /&gt;
== Git hosting requirements ==&lt;br /&gt;
* Public repositories hosted on [http://www.github.com GitHub] (mainly) under the [https://github.com/organizations/radiorabe radiorabe organization] (almost anything which doesn&#039;t leak sensitive information)&lt;br /&gt;
* Private repositories hosted on the internal infrastructure&lt;br /&gt;
** Accessible via https and a web interface&lt;br /&gt;
** contains some repos with uber-private data that gets compartmentalized even further (ie. hiera data files in different repos)&lt;br /&gt;
* One repository per component&lt;br /&gt;
* Daily backup of all repositories&lt;br /&gt;
* Branches for development, staging and production&lt;br /&gt;
** New features are added to the development branch only and later merged up to staging and production&lt;br /&gt;
* Must support pull requests so we can implement a review process (when pulling through the envs)&lt;br /&gt;
** Sign-offs might also be required&lt;br /&gt;
* Adhere to [http://semver.org/ Semantic Versioning] for version/release tags.&lt;br /&gt;
** Tag releases as &amp;lt;code&amp;gt;vX.Y.Z&amp;lt;/code&amp;gt;; these will automatically appear on GitHub as downloadable tarballs, which can be referenced within the corresponding ebuilds.&lt;br /&gt;
** Hit 1.0.0 as soon as code lands on production or earlier&lt;br /&gt;
** Commit .lock files when reaching 1.0.0 where applicable (Gemfile.lock, composer.lock) or earlier if needed&lt;br /&gt;
* Must be able to trigger remote events (ie. update master through mcollective after code was promoted to production in a PR)&lt;br /&gt;
* Support the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model&lt;br /&gt;
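The branching and merge requirements above map directly onto plain git; a minimal sketch of a git-flow style feature merge plus a SemVer release tag (repo and branch names are illustrative, run in a throwaway directory):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of a git-flow style feature merge (illustrative repo, no remotes).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"      # default branch
git checkout -qb develop                             # long-lived integration branch
git checkout -qb feature/semver-tags                 # short-lived feature branch
git commit -q --allow-empty -m "add release tagging docs"
git checkout -q develop
git merge -q --no-ff -m "merge feature/semver-tags" feature/semver-tags
git tag v1.0.0                                       # SemVer release tag
git tag --list                                       # prints "v1.0.0"
```

Promotion from develop to the next ''higher'' branch is then just the same &amp;lt;code&amp;gt;--no-ff&amp;lt;/code&amp;gt; merge, gated by a pull request.&lt;br /&gt;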
&lt;br /&gt;
== Messaging requirements ==&lt;br /&gt;
* I&#039;m talking AMQP, JMS, STOMP, 0MQ and the likes&lt;br /&gt;
** not sure if we need something in this space for the infra&lt;br /&gt;
** it could facilitate comms between components&lt;br /&gt;
** stuff like mcollective and RadioDNS need something in this space&lt;br /&gt;
&lt;br /&gt;
== Monitoring, logging and alarming system requirements ==&lt;br /&gt;
@TODO&lt;br /&gt;
* centralized logging is used throughout&lt;br /&gt;
** with tools that help find and fix problems and do post mortems&lt;br /&gt;
* all systems are always monitored by a full monitoring suite&lt;br /&gt;
* the monitoring suite must support alarming users through multiple paths&lt;br /&gt;
** alarming should include a fallback strategy and a way to acknowledge alarms&lt;br /&gt;
** it must have an easy way to configure scheduled maintenance, either in advance or while the maintenance is underway&lt;br /&gt;
* monitoring, logging and alarming are all automatically configured during regular provisioning of machines&lt;br /&gt;
* alerting uses jabber by default with fallbacks to email and sms-through-gsm depending on the site.&lt;br /&gt;
&lt;br /&gt;
= Implementation proposal =&lt;br /&gt;
== Build host proposal ==&lt;br /&gt;
The build host consists of various chroots used to build binary packages for multiple environments, architectures and build profiles.&lt;br /&gt;
&lt;br /&gt;
== Portage tree clone proposal ==&lt;br /&gt;
== Portage overlay proposal ==&lt;br /&gt;
== Portage profile proposal ==&lt;br /&gt;
== Package and file mirror proposal ==&lt;br /&gt;
== Puppet proposal ==&lt;br /&gt;
* Adhere to Craig Dunn&#039;s [http://www.craigdunn.org/2012/05/239/ architecture] [http://www.slideshare.net/PuppetLabs/roles-talk]&lt;br /&gt;
** on the system level (ie. for each bare-metal or virtual machine)&lt;br /&gt;
*** roles contain the business view (ie. [https://github.com/radiorabe/puppet/blob/master/role/manifests/puppet/master.pp role::puppet::master])&lt;br /&gt;
*** profiles contain the implementation (such as [https://github.com/radiorabe/puppet/blob/master/profile/manifests/puppet/master.pp profile::puppet::master])&lt;br /&gt;
** on the architecture level (ie. in the cloud-fabric)&lt;br /&gt;
*** roles contain the business view (ie. role::cloud-storage, role::product1)&lt;br /&gt;
*** profiles contain the implementation (ie profile::storage-cluster, profile::storage-webinterface-farm)&lt;br /&gt;
* Keep profiles, roles (as per craig) and Puppetfile in [https://github.com/radiorabe/puppet github.com/radiorabe/puppet]&lt;br /&gt;
** This is where we keep feature/*, develop and master (ie staging) branches&lt;br /&gt;
** An internal clone then contains all these + production (what exactly is in production, ie. our release schedule, is considered sensitive in this implementation)&lt;br /&gt;
** This lets us use the [http://nvie.com/posts/a-successful-git-branching-model/ git-flow] branching model with almost no changes (the one change being us gating stuff into production on the closed clone)&lt;br /&gt;
** GitHub may use hooks to push content to our internal git when changes happen&lt;br /&gt;
* All other modules need their own repo and must be published to the puppet module forge&lt;br /&gt;
* Use librarian-puppet (or r10k) for composing the final puppet envs&lt;br /&gt;
** r10k eschews the git submodule support we used in puppet-syslogng, but has support for multiple envs out of the box&lt;br /&gt;
** librarian-puppet would need to be run once per environment to achieve what r10k does&lt;br /&gt;
* provide develop, master and production branches from private repo as puppet environments on master&lt;br /&gt;
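Under this proposal, r10k maps every branch of the private puppet repo to a puppet environment of the same name; a minimal &amp;lt;code&amp;gt;r10k.yaml&amp;lt;/code&amp;gt; sketch (the remote URL and basedir are assumptions, not the real setup):&lt;br /&gt;

```yaml
# /etc/r10k.yaml -- sketch; remote and basedir are made-up placeholders
:sources:
  :radiorabe:
    remote: 'git@git.example.com:radiorabe/puppet.git'
    basedir: '/etc/puppet/environments'
```

Running &amp;lt;code&amp;gt;r10k deploy environment&amp;lt;/code&amp;gt; then checks out develop, master and production side by side, each with the modules pinned by its Puppetfile.&lt;br /&gt;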
&lt;br /&gt;
== Install host proposal ==&lt;br /&gt;
* use the existing server on [[tftp-01]] on the RaBe infra as a shortcut&lt;br /&gt;
** replace that instance with one native to the infra when it is ready for that&lt;br /&gt;
* iPXE [http://ipxe.org/]&lt;br /&gt;
&lt;br /&gt;
== Public key infrastructure proposal ==&lt;br /&gt;
* write [[certificate policy]] (in German!)&lt;br /&gt;
* hold a key ceremony for the root and level 1&lt;br /&gt;
** offline ceremony on an old netbook with centos or similar (not debian, probably not gentoo to make this happen soonish)&lt;br /&gt;
** Sign RaBe root cert and level 1 intermediate cert&lt;br /&gt;
** store the root cert key on 2 SD cards and as 1 printout, somewhere safe&lt;br /&gt;
** store the level 1 intermediate key on SD cards for use by admins&lt;br /&gt;
* use the level 1 intermediate key to sign level 2 CAs as needed&lt;br /&gt;
** level 2 robot ca key for puppet (managed by &amp;lt;code&amp;gt;puppet ca&amp;lt;/code&amp;gt;)&lt;br /&gt;
** level 2 ca for client certs&lt;br /&gt;
** level 2 ca for host certs&lt;br /&gt;
** more level 2 CAs&lt;br /&gt;
* use OpenSSL as default software for PKI&lt;br /&gt;
** OpenSSL has the largest user base, which should make it easier on new admins&lt;br /&gt;
** features that OpenSSL does not yet implement get adopted as soon as OpenSSL catches up (ie. [http://cmpforopenssl.sourceforge.net/ CMP])&lt;br /&gt;
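The root/level 1 split above can be exercised end to end with nothing but the openssl CLI; a rough sketch of the signing ceremony (key sizes, lifetimes and DNs are illustrative, and a real CA would also need basicConstraints/keyUsage extensions plus the offline handling described above):&lt;br /&gt;

```shell
#!/bin/sh
# Sketch of the root + level 1 signing ceremony (illustrative values only).
set -e
workdir=$(mktemp -d)
cd "$workdir"
# self-signed root CA, kept offline afterwards
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 7300 -subj "/DC=ch/DC=rabe/CN=RaBe Root CA"
# level 1 intermediate: CSR signed by the root
openssl req -newkey rsa:2048 -nodes -keyout level1.key -out level1.csr \
  -subj "/DC=ch/DC=rabe/CN=RaBe Level 1 CA"
openssl x509 -req -in level1.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -days 3650 -out level1.crt
# the chain must verify before the level 1 key signs any level 2 CA
openssl verify -CAfile root.crt level1.crt
```

The same &amp;lt;code&amp;gt;x509 -req&amp;lt;/code&amp;gt; step, run with the level 1 key, would then mint the level 2 CAs.&lt;br /&gt;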
&lt;br /&gt;
== git hosting proposal ==&lt;br /&gt;
&lt;br /&gt;
* [http://gitlab.org/ gitlab] seems nice even though it is Ruby on Rails under the hood&lt;br /&gt;
&lt;br /&gt;
= Links =&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Binary_package_guide Gentoo Binary Package Guide]&lt;br /&gt;
* [http://wiki.gentoo.org/wiki/Preserve-libs Gentoo preserve-libs]&lt;br /&gt;
* [http://swift.siphos.be/aglara/ A Gentoo Linux Advanced Reference Architecture]&lt;br /&gt;
* man pages&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/portage.5.html portage(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/emerge.1.html emerge(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/make.conf.5.html make.conf(5)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.1.html ebuild(1)]&lt;br /&gt;
** [http://dev.gentoo.org/~zmedico/portage/doc/man/ebuild.5.html ebuild(5)]&lt;br /&gt;
&lt;br /&gt;
[[Category: Infrastructure]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Requirements&amp;diff=2631</id>
		<title>stoney orchestra: Requirements</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Requirements&amp;diff=2631"/>
		<updated>2014-01-05T21:37:02Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;noinclude&amp;gt;&lt;br /&gt;
== Overview ==&lt;br /&gt;
== Requirements ==&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Version controlled via Git&lt;br /&gt;
* ENC support&lt;br /&gt;
* Puppet recipes for &lt;br /&gt;
** installing, updating, removing and (re-)configuring specific software belonging to an application stack (see [[#Build_host_requirements|build host]]).&lt;br /&gt;
** (re-)configuring software belonging to a system stack&lt;br /&gt;
** Updating the system stack (&amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;) aka system update.&lt;br /&gt;
** installing, updating and removing of kernel packages (including the handling of the ensuing reboot)&lt;br /&gt;
* use best-of-breed tools like hiera and augeas (this might mean targeting 3.3.x due to module data support in [https://github.com/puppetlabs/armatures/blob/master/arm-9.data_in_modules/index.md ARM-9])&lt;br /&gt;
* Use a sane preexisting puppet architecture concept&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
[[Category:stoney orchestra]]&lt;br /&gt;
[[Category:Requirements]]&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Requirements&amp;diff=2630</id>
		<title>stoney orchestra: Requirements</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=stoney_orchestra:_Requirements&amp;diff=2630"/>
		<updated>2014-01-05T21:34:49Z</updated>

		<summary type="html">&lt;p&gt;Lucas: Created page with &amp;quot;== Overview == == Requirements == * Support for all three environments (development, staging and production) * Version controlled via Git * ENC support * Puppet recipes for  *...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* Support for all three environments (development, staging and production)&lt;br /&gt;
* Version controlled via Git&lt;br /&gt;
* ENC support&lt;br /&gt;
* Puppet recipes for &lt;br /&gt;
** installing, updating, removing and (re-)configuring specific software belonging to an application stack (see [[#Build_host_requirements|build host]]).&lt;br /&gt;
** (re-)configuring software belonging to a system stack&lt;br /&gt;
** Updating the system stack (&amp;lt;code&amp;gt;emerge @system&amp;lt;/code&amp;gt;) aka system update.&lt;br /&gt;
** installing, updating and removing of kernel packages (including the handling of the ensuing reboot)&lt;br /&gt;
* use best-of-breed tools like hiera and augeas (this might mean targeting 3.3.x due to module data support in [https://github.com/puppetlabs/armatures/blob/master/arm-9.data_in_modules/index.md ARM-9])&lt;br /&gt;
* Use a sane preexisting puppet architecture concept&lt;br /&gt;
&lt;br /&gt;
[[Category:stoney orchestra]]&lt;br /&gt;
[[Category:Requirements]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas&amp;diff=2629</id>
		<title>User:Lucas</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=User:Lucas&amp;diff=2629"/>
		<updated>2014-01-05T21:31:13Z</updated>

		<summary type="html">&lt;p&gt;Lucas: Created page with &amp;quot;Itsa Mee :)&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Itsa Mee :)&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Hiera_Example&amp;diff=2628</id>
		<title>Hiera Example</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Hiera_Example&amp;diff=2628"/>
		<updated>2014-01-05T21:29:51Z</updated>

		<summary type="html">&lt;p&gt;Lucas: explain a bit what this is&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
---&lt;br /&gt;
:backends:&lt;br /&gt;
  - ldap&lt;br /&gt;
  - yaml&lt;br /&gt;
  - json&lt;br /&gt;
:ldap:&lt;br /&gt;
  :url:ldaps://ldap.stoney-cloud.org:636/&lt;br /&gt;
  :binddn:cn=Manager,dc=stoney-cloud,dc=org&lt;br /&gt;
  :bindpw:secret&lt;br /&gt;
  :basedn:dc=stoney-cloud,dc=org&lt;br /&gt;
:yaml:&lt;br /&gt;
  :datadir: /etc/puppet/hieradata&lt;br /&gt;
:json:&lt;br /&gt;
  :datadir: /etc/puppet/hieradata&lt;br /&gt;
:hierarchy:&lt;br /&gt;
  - &amp;quot;ou=virtual machines,ou=services?sub?(&amp;amp;(sstNetworkHostName=%{::hostname})(sstNetworkDomainName=%{::domainname}))&amp;quot;&lt;br /&gt;
  - &amp;quot;ou=software stack,ou=configuration?sub?(uid=%{::rzUid})&amp;quot;&lt;br /&gt;
  - &amp;quot;%{::clientcert}&amp;quot;&lt;br /&gt;
  - &amp;quot;%{::custom_location}&amp;quot;&lt;br /&gt;
  - common&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notes:&lt;br /&gt;
* This is an example of how a hiera config file might look with a mock LDAP backend. The backend in question still needs to be found or written.&lt;br /&gt;
* Mapping from a DN to a directory structure would be nice, so we would write &#039;&#039;ou=virtual machines/ou=services&#039;&#039; instead, to stay compatible with the already existing yaml/json backends; or something different entirely&lt;br /&gt;
&lt;br /&gt;
[[Category: Documentation]]&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Talk:stoney_web:_Requirements&amp;diff=2091</id>
		<title>Talk:stoney web: Requirements</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Talk:stoney_web:_Requirements&amp;diff=2091"/>
		<updated>2013-12-01T23:18:00Z</updated>

		<summary type="html">&lt;p&gt;Lucas: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== DB Quota? ==&lt;br /&gt;
&lt;br /&gt;
I&#039;m assuming the DB-Quota here is due to the fact that you are also planning on using the MySQL DB for Zabbix data. Otherwise it would seem to be overkill as I can&#039;t see stoney web using that much space on mysql. 00:18, 2 December 2013 (CET)&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
	<entry>
		<id>https://wiki.stoney-cloud.org/w/index.php?title=Talk:stoney_web:_Requirements&amp;diff=2090</id>
		<title>Talk:stoney web: Requirements</title>
		<link rel="alternate" type="text/html" href="https://wiki.stoney-cloud.org/w/index.php?title=Talk:stoney_web:_Requirements&amp;diff=2090"/>
		<updated>2013-12-01T21:46:39Z</updated>

		<summary type="html">&lt;p&gt;Lucas: Created page with &amp;quot;== DB Quota? ==  I&amp;#039;m assuming the DB-Quota here is due to the fact that you are also planning on using the MySQL DB for Zabbix data. Otherwise it would seem to be overkill as ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== DB Quota? ==&lt;br /&gt;
&lt;br /&gt;
I&#039;m assuming the DB-Quota here is due to the fact that you are also planning on using the MySQL DB for Zabbix data. Otherwise it would seem to be overkill as I can&#039;t see stoney web using that much space on mysql.&lt;/div&gt;</summary>
		<author><name>Lucas</name></author>
	</entry>
</feed>