More a log than instructions. Double-check which OpenStack release to use before starting.

* Networking

Find a similar cloud node and generate the /etc/sysconfig/network-scripts/ifcfg-*.[12,13,1982] config files accordingly (a sketch follows below).
systemctl restart network.service
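A minimal sketch of one VLAN sub-interface config, e.g. /etc/sysconfig/network-scripts/ifcfg-em1.12 (the numbers above look like VLAN IDs); device name, VLAN ID and addresses are placeholders, take the real values from the similar node:
# hypothetical example; copy the real values from a similar node
DEVICE=em1.12
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.1.N.N
NETMASK=255.255.0.0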

* Prepare a cloud node

yum install -y yum-plugin-priorities
check whether the epel repo file has an include list rather than an exclude list; if not, copy it off another cloud node and whinge at Simon
yum install -y centos-release-openstack-stein (you may have to enable the extras repo in CentOS-Base.repo for this; extras now also seems to be needed to install openstack-nova-compute)
add puppet and facter to exclude list in CentOS-OpenStack-stein.repo
exclude=sip,PyQt4,puppet,facter
add priority=10 to CentOS-OpenStack-stein.repo
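The relevant stanza in CentOS-OpenStack-stein.repo should then end up looking like this (section name from memory, not verified against the actual file; the ... stands for the stock entries):
[centos-openstack-stein]
...
exclude=sip,PyQt4,puppet,facter
priority=10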
yum upgrade
yum install python-openstackclient
yum install openstack-selinux
yum install openstack-nova-compute
yum install openstack-neutron-linuxbridge
yum install ebtables ipset ceph-common
yum install openstack-ceilometer-compute
increase the maximum number of open files for neutron-linuxbridge-agent:
mkdir /etc/systemd/system/neutron-linuxbridge-agent.service.d
copy /etc/systemd/system/neutron-linuxbridge-agent.service.d/10_nofiles.conf from any existing cloud node
systemctl daemon-reload
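10_nofiles.conf is a plain systemd drop-in and presumably looks like this (the actual limit is whatever the existing nodes use; 16384 is a placeholder):
[Service]
# raise the open-file limit for the agent
LimitNOFILE=16384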

* Config files

The following config files are identical on all the nodes and should be in puppet ;-)
/etc/hosts
/etc/sysconfig/iptables
/etc/ceph/ceph.conf

* Secrets

Step 1:
pdsh -w root@clc[01-14] -w root@clb[00-11] "systemctl enable libvirtd.service"
pdsh -w root@clc[01-14] -w root@clb[00-11] "systemctl start libvirtd.service"

Step 2:
Copy /root/secret.xml from clc00 to all the nodes
Step 3:
pdsh -w root@clc[01-14] -w root@clb[00-11] "virsh secret-define --file secret.xml"
pdsh -w root@clc[01-14] -w root@clb[00-11] "virsh secret-set-value --secret uuid_from_xml --base64 string_from_keyring_on clc00"

Step 4:
Copy /etc/ceph/ceph.client.cinder.keyring from cla02 to all new nodes. This is the same keyring as on clc00, except that on clc00 it is, for some reason, located in /root.
Add the new node to the /root/.ssh/known_hosts file on your favourite cloud node, then copy that file and /root/.ssh/id_rsa to the new node and *all* of the other nodes.
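Assuming pdcp (shipped with pdsh) is available, both files can be pushed in one go:
pdcp -w root@clc[01-14] -w root@clb[00-11] /root/.ssh/known_hosts /root/.ssh/id_rsa /root/.ssh/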

* nova

Copy nova.conf from clc00, check owner and group, and then change my_ip in nova.conf:
pdsh -w root@clc[03-14] -w root@clb[00-11] 'HOSTNAME=`hostname -s`.cloud; MYIP=`grep ${HOSTNAME} /etc/hosts | cut -d" " -f1`; sed -e "s#^my_ip =.*\$#my_ip = ${MYIP}#" -i /etc/nova/nova.conf'

Notes from the last round:

* neutron

neutron.conf:
Due to a change in ownership, this config file has to be installed after the corresponding packages:
for i in `seq -w 01 14`; do scp -3 root@clc00:/etc/neutron/neutron.conf root@clc$i:/etc/neutron/neutron.conf; done
Owner: -rw-r-----. 1 root neutron 71749 Jul 10 15:12 /etc/neutron/neutron.conf

copy /etc/neutron/plugins/ml2/linuxbridge_agent.ini from clc00 to all the nodes, check ownership: -rw-r-----. 1 root neutron 10226 Jul 10 12:47 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In linuxbridge_agent.ini set local_ip to the machine's private IP (10.1.N.N) and check that physical_interface_mappings matches the interface defined in /etc/sysconfig/network-scripts/ (sketch below).
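A sketch of the relevant stanzas; the mapping name and interface here are placeholders, and in the stock agent config local_ip sits in the [vxlan] section:
[linux_bridge]
# physical-network-name:interface; must match the ifcfg-* setup
physical_interface_mappings = provider:em1

[vxlan]
local_ip = 10.1.N.N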
pdsh -w root@clc[01-14] -w root@clb[00-11] "systemctl enable neutron-linuxbridge-agent.service openstack-nova-compute.service"
pdsh -w root@clc[01-14] -w root@clb[00-11] "systemctl start neutron-linuxbridge-agent.service openstack-nova-compute.service"

* ceilometer

pdsh -w root@clc[00-14] -w root@clb[00-11] -w root@cla[00-10] "yum -y install openstack-ceilometer-compute"
for i in `seq -w 00 14`; do scp -3 root@osceil:/etc/ceilometer/ceilometer.conf root@clc$i:/etc/ceilometer/ceilometer.conf; done (and likewise for the clb and cla nodes)
patch /etc/nova/nova.conf with:
instance_usage_audit_period=hour
instance_usage_audit=True
notify_on_state_change=vm_and_task_state
driver=messagingv2 (in [oslo_messaging_notifications])
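Where these lines go is an assumption based on the stock nova.conf layout; note that on Stein notify_on_state_change may live in [notifications] rather than [DEFAULT], so check an existing node:
[DEFAULT]
instance_usage_audit_period=hour
instance_usage_audit=True
notify_on_state_change=vm_and_task_state

[oslo_messaging_notifications]
driver=messagingv2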
systemctl enable openstack-ceilometer-compute.service
systemctl start openstack-ceilometer-compute.service
systemctl restart openstack-nova-compute.service

* Optimize settings

tuned-adm profile virtual-host

* osnova

To add new cloud nodes to the cluster, run on osnova:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

* add to aggregate

I did that as admin on the web interface: Admin -> Compute -> Host Aggregates. There is probably a programmatic way. The output (on osbase) of nova help aggregate-add-host seems relevant:
[root@osbase ~]# nova help aggregate-add-host
usage: nova aggregate-add-host <aggregate> <host>
Add the host to the specified aggregate.
Positional arguments:
  <aggregate>  Name or ID of aggregate.
  <host>       The host to add to the aggregate.
(Enable the service in Admin/Hypervisors if necessary.)
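The openstack client equivalent appears to be:
openstack aggregate add host [name or ID of aggregate] [host]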

* Removing a cloud node

Remove from aggregate: the reverse of adding to the aggregate. The node should already be disabled at that point (see the command below).
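If it is not, it can be disabled from the command line first (host name is a placeholder):
openstack compute service set --disable [hostname] nova-compute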
on osbase:
openstack compute service list
openstack compute service delete [id from previous command]
openstack network agent list
openstack network agent delete [id from previous command]