This guide walks you through configuring a basic Puppet script to install OpenShift Origin from RPMs.

The OpenShift Origin RPMs are available from several sources; you select one via the install_method parameter described later in this guide.

For OpenShift Origin broker/node hosts to be configured properly, each host needs a DNS-resolvable hostname and a static IP address.

1. Puppet Setup

1.1. Enterprise Prerequisites

For Enterprise Linux systems you will need some extra repositories to install the necessary version of puppet. This is unnecessary for Fedora 19 systems.
yum install -y --nogpgcheck ${url_of_the_latest_epel-release_rpm}
yum install -y --nogpgcheck ${url_of_the_latest_puppet-release_rpm}
  • Add exclude=*mcollective* activemq to the puppet repo entries in the file /etc/yum.repos.d/puppetlabs.repo (see the sketch after this list)

  • "Optional" repository - enable depending on your source:

    • yum-config-manager --enable rhel-6-server-optional-rpms

    • Under RHN classic, enable the rhel-x86_64-server-optional-6 channel
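
For reference, an edited puppetlabs repo entry might look like the following (a sketch; the repo id, baseurl, and gpgkey details will vary with your release):

[puppetlabs-products]
name=Puppet Labs Products EL 6 - $basearch
baseurl=http://yum.puppetlabs.com/el/6/products/$basearch
enabled=1
gpgcheck=1
exclude=*mcollective* activemq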

1.2. Install Puppet

Run the following to install puppet and facter:

yum install -y puppet facter tar

Create the puppet module directory:

mkdir -p /etc/puppet/modules

Install the openshift puppet module into this directory:

puppet module install openshift/openshift_origin --version 3.0.1
If you would like to work from the puppet module source instead, you can clone the GitHub repository into the same directory:
git clone https://github.com/openshift/puppet-openshift_origin.git /etc/puppet/modules/openshift_origin
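
Either way, you can confirm the module is in place before proceeding:

puppet module list
# the listing under /etc/puppet/modules should include the openshift_origin module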

2. Generating BIND TSIG Key

This key will be used to update DNS records in the BIND server that will be installed, both for managing application DNS and (by default) for creating host DNS records.

Install the BIND package

yum install -y bind

Generate a TSIG Key

#Using example.com as the cloud domain
/usr/sbin/dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom -K /var/named example.com
cat /var/named/Kexample.com.*.key  | awk '{print $8}'

The TSIG key should look like CNk+wjszKi9da9nL/1gkMY7H+GuUng==. We will use this in the following steps.
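
Once named has been configured by puppet (section 4), you can sanity-check dynamic updates with nsupdate using this key (a sketch; the test record and IP are illustrative, and the key name is assumed to match the cloud domain):

echo -e "server 127.0.0.1\nupdate add test.example.com 180 A 10.0.0.1\nsend" | \
  nsupdate -y "hmac-md5:example.com:CNk+wjszKi9da9nL/1gkMY7H+GuUng=="
# a silent exit means the update was accepted; verify with:
dig @127.0.0.1 test.example.com A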

3. Update the Host’s Name

Choose a hostname and substitute it where needed for "broker.example.com" below. This sets the host’s name locally, not in DNS. For nodes, this is used as the server identity. Generally it is best to use a name that matches how the host will resolve in DNS.

Fedora
echo "broker.example.com" > /etc/hostname
hostname broker.example.com
RHEL
cat <<EOF > /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=broker.example.com
EOF

hostname broker.example.com
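
On either OS, confirm the change took effect:

hostname
# should print broker.example.com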

4. Puppet Configuration

You will create a file for puppet’s installation parameters for this host. This file will define one class (openshift_origin) to install any component(s) of OpenShift Origin on the host.

4.1. Configuring an all-in-one host

Create a puppet configuration file configure_origin.pp using the following template (more examples follow):

Example: Single host (broker+console+node) with local BIND DNS and the htpasswd file Auth plugin:
    class { 'openshift_origin' :
      # Components to install on this host:
      roles			 => ['broker','named','activemq','datastore','node'],

      # The FQDNs of the OpenShift component hosts; for a single-host
      # system, make all values identical.
      broker_hostname            => 'broker.example.com',
      node_hostname              => 'broker.example.com',
      named_hostname             => 'broker.example.com',
      datastore_hostname         => 'broker.example.com',
      activemq_hostname          => 'broker.example.com',

      # BIND / named config
      # This is the key for updating the OpenShift BIND server
      bind_key                   => 'CNk+wjszKi9da9nL/1gkMY7H+GuUng==',
      # The domain under which applications should be created.
      domain                     => 'example.com',
      # Apps would be named <app>-<namespace>.example.com
      # This also creates hostnames for local components under our domain
      register_host_with_named   => true,
      # Forward requests for other domains (to Google by default)
      conf_named_upstream_dns    => ['8.8.8.8'],

      # Auth OpenShift users created with htpasswd tool in /etc/openshift/htpasswd
      broker_auth_plugin         => 'htpasswd',
      # Username and password for initial openshift user
      openshift_user1            => 'openshift',
      openshift_password1        => 'password',

      # To enable installing the Jenkins cartridge:
      install_method             => 'yum',
      jenkins_repo_base          => 'http://pkg.jenkins-ci.org/redhat',

      #Enable development mode for more verbose logs
      development_mode           => true,

      # Set if using an external-facing ethernet device other than eth0
      #conf_node_external_eth_dev => 'eth0',

      #If using with GDM, or have users with UID 500 or greater, put in this list
      #node_unmanaged_users       => ['user1'],
    }

In this configuration, the host will run the broker, node, ActiveMQ, MongoDB and BIND servers. You will need to substitute the BIND DNS key that was generated above; you may wish to adjust other parameters as well, such as the domain, host names, and initial user.

Execute the puppet script:

puppet apply --verbose configure_origin.pp

Assuming everything runs cleanly, installation is complete. Otherwise, you can resolve the errors shown (warnings can often be ignored) and re-run puppet until it runs cleanly.

Puppet is supposed to register the host DNS entries for you, but you may find this isn’t working. If you have not already arranged for the DNS resolution of this host, you can now use the oo-register-dns tool to do so:

# oo-register-dns --domain example.com --with-node-hostname broker --with-node-ip <broker IP>
# ping broker.example.com
PING broker.example.com (172.x.x.x) 56(84) bytes of data.
64 bytes from 172.x.x.x: icmp_seq=1 ttl=64 time=0.020 ms

Assuming everything runs cleanly and host DNS resolves, reboot the system for all settings and services to go into effect.

4.2. Configuring separate hosts for broker/node

A single host is nice for just getting started with OpenShift, but a more representative deployment at least separates the node onto a different host, as shown below. For this example, prepare at least two hosts to configure with puppet.

4.2.1. Broker host

In this configuration, the first host will run the broker, ActiveMQ, MongoDB, and BIND servers.

Create a file configure_origin.pp with the following template. As with the all-in-one host configuration file, parameters should be modified as necessary, particularly the bind_key.

    class { 'openshift_origin' :
      # Components to install on this host:
      roles			 => ['broker','named','activemq','datastore'],

      # BIND / named config
      # This is the key for updating the OpenShift BIND server
      bind_key                   => 'CNk+wjszKi9da9nL/1gkMY7H+GuUng==',
      # The domain under which applications should be created.
      domain                     => 'example.com',
      # Apps would be named <app>-<namespace>.example.com
      # This also creates hostnames for local components under our domain
      register_host_with_named   => true,
      # Forward requests for other domains (to Google by default)
      conf_named_upstream_dns    => ['8.8.8.8'],

      # The FQDNs of the OpenShift component hosts
      broker_hostname            => 'broker.example.com',
      named_hostname             => 'broker.example.com',
      datastore_hostname         => 'broker.example.com',
      activemq_hostname          => 'broker.example.com',

      # Auth OpenShift users created with htpasswd tool in /etc/openshift/htpasswd
      broker_auth_plugin         => 'htpasswd',
      # Username and password for initial openshift user
      openshift_user1            => 'openshift',
      openshift_password1        => 'password',

      #Enable development mode for more verbose logs
      development_mode           => true,
    }

Execute the puppet script:

puppet apply --verbose configure_origin.pp

As with the all-in-one host, ensure puppet runs cleanly and the host DNS resolves, then reboot.
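
After the reboot, you can do a quick smoke test of the broker REST API with the initial user configured above (a sketch; assumes the htpasswd plugin and the openshift/password credentials from the template):

curl -k -u openshift:password https://broker.example.com/broker/rest/api.json
# a JSON description of the available API versions indicates the broker is responding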

4.2.2. Node host

The second host will be configured as a node, which is where applications actually run. Be sure to set the local hostname differently; in our example it should be "node1.example.com".

    class { 'openshift_origin' :
      # Components to install on this host:
      roles			 => ['node'],

      # BIND / named config
      # This is the IP address for OpenShift BIND server - here, the broker.
      named_ip_addr              => '<broker IP address>',
      # This is the key for updating the OpenShift BIND server
      bind_key                   => 'CNk+wjszKi9da9nL/1gkMY7H+GuUng==',
      # The domain under which applications should be created.
      domain                     => 'example.com',
      # Apps would be named <app>-<namespace>.example.com
      # This also creates hostnames for local components under our domain
      register_host_with_named   => true,

      # The FQDNs of the OpenShift component hosts we will need
      broker_hostname            => 'broker.example.com',
      activemq_hostname          => 'broker.example.com',
      node_hostname              => 'node1.example.com',

      # To enable installing the Jenkins cartridge:
      install_method             => 'yum',
      jenkins_repo_base          => 'http://pkg.jenkins-ci.org/redhat',

      #Enable development mode for more verbose logs
      development_mode           => true,

      # Set if using an external-facing ethernet device other than eth0
      #conf_node_external_eth_dev => 'eth0',

      #If using with GDM, or have users with UID 500 or greater, put in this list
      #node_unmanaged_users       => ['user1'],
    }

Execute the puppet script:

puppet apply --verbose configure_origin.pp

If you have not already arranged for the DNS resolution of this host, you can now use the oo-register-dns tool on the broker host to do so:

oo-register-dns --with-node-hostname node1 --with-node-ip <node IP> --domain example.com

As with the all-in-one host, ensure puppet runs cleanly and the host DNS resolves, then reboot.
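
OpenShift also ships diagnostics that can catch common problems at this point: oo-accept-broker on the broker host and oo-accept-node on the node host. For example, on the node:

oo-accept-node -v
# PASS means the node configuration checks out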

This should give you a working OpenShift deployment separated into two hosts, one for broker components and one for a node. You may add as many more node hosts as you like.

4.3. Different plugins: mDNS and Mongo auth

This is just an example configuration demonstrating using the mDNS plugin (so that hosts on the same LAN can resolve the host and applications without altering resolv.conf) and the Mongo auth plugin (which stores user credentials in MongoDB).

Example: Single host (broker+console+node) using Avahi mDNS and Mongo auth plugins:
class { 'openshift_origin' :
  domain                     => 'openshift.local',
  register_host_with_named   => true,
  conf_named_upstream_dns    => ['8.8.8.8'],
  install_method             => 'yum',
  jenkins_repo_base          => 'http://pkg.jenkins-ci.org/redhat',
  broker_auth_plugin         => 'mongo',
  broker_dns_plugin          => 'avahi',
  development_mode           => true,
  conf_node_external_eth_dev => 'eth0',
  node_unmanaged_users       => ['root'],
}

Apply the puppet config and reboot as before.

You may access the broker at broker.openshift.local; the initial user/pass is admin/admin.
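
To verify mDNS resolution from another machine on the same LAN (assuming the avahi command-line tools are installed there):

avahi-resolve-host-name broker.openshift.local
# should print the hostname alongside the broker's IP address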

4.4. Different plugins: Kerberos auth and DNS

This example uses Kerberos for user authentication, and a Kerberos keytab for making authenticated updates to a remote nameserver.

Example: Single host (broker+console+node) which uses the Kerberos Auth plugin and GSS-TSIG.
class { 'openshift_origin' :
  domain                     => 'example.com',
  install_method             => 'yum',
  jenkins_repo_base          => 'http://pkg.jenkins-ci.org/redhat',
  development_mode           => true,
  conf_node_external_eth_dev => 'eth0',
  node_unmanaged_users       => ['root'],

  # broker authenticates updates to BIND server with keytab
  broker_dns_plugin          => 'nsupdate',
  named_ip_addr              => '<BIND server IP address>',
  bind_krb_principal         => $hostname,
  bind_krb_keytab            => '/etc/dns.keytab',
  register_host_with_named   => true,

  # authenticate OpenShift users with kerberos
  broker_auth_plugin         => 'kerberos',
  broker_krb_keytab          => '/etc/http.keytab',
  broker_krb_auth_realms     => 'EXAMPLE.COM',
  broker_krb_service_name    => $hostname,
}

Please note:

  • The Broker needs to be enrolled in the KDC both as a host (host/node_fqdn) and as a service (HTTP/node_fqdn)

  • The keytab should be generated and placed on the Broker machine, and Apache must be able to read it (chown apache <kerberos_keytab>)

  • As in the example config above:

    • set broker_auth_plugin to 'kerberos'

    • set broker_krb_keytab and bind_krb_keytab to the absolute file location of the keytab

    • set broker_krb_auth_realms to the kerberos realm that the Broker host is enrolled with

    • set broker_krb_service_name to the FQDN of the enrolled kerberos service, e.g. $hostname

  • After setup, to test:

    • authentication: kinit <user> then curl -Ik --negotiate -u : <node_fqdn>

    • GSS-TSIG (should return nil):

Use the Rails console on the broker to access the DNS plugin and test that it creates application records.

# cd /var/www/openshift/broker
# scl enable ruby193 bash  # (needed for Enterprise Linux only)
# bundle --local
# rails console
irb> d = OpenShift::DnsService.instance
irb> d.register_application "appname", "namespace", "node_fqdn"
  => nil

For any errors, on the Broker, check /var/log/openshift/broker/httpd/error_log.

4.5. Puppet Parameters

An exhaustive list of the parameters you can specify in the puppet configuration follows.

4.5.1. roles

Choose from the following roles to be configured on this node.

  • broker - Installs the broker and console.

  • node - Installs the node and cartridges.

  • activemq - Installs the ActiveMQ message broker.

  • datastore - Installs MongoDB (not sharded/replicated).

  • named - Installs a BIND DNS server configured with a TSIG key for updates.

Default: [broker,node,activemq,datastore,named]

4.5.2. install_method

Choose from the following ways to provide packages:

  1. none - install sources are already set up when the script executes

  2. yum - set up yum repos manually

    • repos_base

    • os_repo

    • os_updates_repo

    • jboss_repo_base

    • jenkins_repo_base

    • optional_repo

Default: yum

4.5.3. repos_base

The base URL for the OpenShift Origin repositories used with the "yum" install method. Dependencies are drawn from here; the path used for the OpenShift RPMs themselves can be redirected with override_install_repo (below).

4.5.4. override_install_repo

Repository path override. Dependencies are taken from repos_base, but the override_install_repo path is used for the OpenShift RPMs. Used when doing local builds.

Default: none

4.5.5. os_repo

The URL for a Fedora 19/RHEL 6 yum repository used with the "yum" install method. Should end in x86_64/os/.

Default: no change

4.5.6. os_updates_repo

The URL for a Fedora 19/RHEL 6 yum updates repository used with the "yum" install method. Should end in x86_64/.

Default: no change

4.5.7. jboss_repo_base

The base URL for the JBoss repositories used with the "yum" install method. No repository is installed if this is not specified.

4.5.8. jenkins_repo_base

The base URL for the Jenkins repositories used with the "yum" install method. No repository is installed if this is not specified.

4.5.9. optional_repo

The URL for an EPEL or "optional" repository used with the "yum" install method. No repository is installed if this is not specified.

4.5.10. domain

The network domain under which apps and hosts will be placed.

Default: example.com

4.5.11. broker_hostname

4.5.12. node_hostname

4.5.13. named_hostname

4.5.14. activemq_hostname

4.5.15. datastore_hostname

Default: the component name prefixed to the domain, e.g. broker.example.com.

These supply the FQDN of the hosts containing these components. Used for configuring the host’s name at install, and also for configuring the broker application to reach the services needed.

If installing a nameserver, the script will create DNS entries for the hostnames of the other components being installed on this host as well. If you are using a nameserver that was set up separately, you are responsible for all necessary DNS entries.

4.5.16. named_ip_addr

This is used by every host to configure its primary nameserver. Set it to the IP address of the named instance; when named is being installed on this host, the host's current IP suffices.

Default: the current IP (at install)

4.5.17. bind_key

When the nameserver is remote, use this to specify the HMAC-MD5 key for updates. This is the "Key:" field from the .private key file generated by dnssec-keygen. This field is required on all nodes.

4.5.18. bind_krb_keytab

When the nameserver is remote, Kerberos keytab together with principal can be used instead of the HMAC-MD5 key for updates.

4.5.19. bind_krb_principal

When the nameserver is remote, this Kerberos principal together with Kerberos keytab can be used instead of the HMAC-MD5 key for updates.

4.5.20. conf_named_upstream_dns

List of upstream DNS servers to use when installing named on this node.

Default: [8.8.8.8]

4.5.21. broker_ip_addr

This is used by the node to record its broker. It is also the default for the nameserver IP if none is given.

Default: the current IP (at install)

4.5.22. node_ip_addr

This is used by the node to advertise its public IP, if different from the one on its NIC.

Default: the current IP (at install)

4.5.23. configure_ntp

Enabling this configures NTP. It is important that the time be synchronized across hosts because MCollective messages have a TTL of 60 seconds and may be dropped if the clocks are too far out of sync. However, NTP is not necessary if the clock will be kept in sync by some other means.

Default: true

Passwords used to secure various services. You are advised to specify only alphanumeric values in this script as others may cause syntax errors depending on context. If non-alphanumeric values are required, update them separately after installation.

4.5.24. activemq_admin_password

This is the admin password for the ActiveMQ admin console, which is not needed by OpenShift but might be useful in troubleshooting.

Default: scrambled

4.5.25. mcollective_user

4.5.26. mcollective_password

These are the username and password shared between broker and node hosts for communicating over the MCollective topic channels in ActiveMQ. They must be the same on all broker and node hosts.

Default: mcollective/marionette

4.5.27. mongodb_admin_user

4.5.28. mongodb_admin_password

These are the username and password of the administrative user that will be created in the MongoDB datastore. These credentials are not used by this script or by OpenShift, but an administrative user must be added to MongoDB in order for it to enforce authentication.

Default: admin/mongopass

4.5.29. mongodb_broker_user

4.5.30. mongodb_broker_password

These are the username and password of the normal user that will be created for the broker to connect to the MongoDB datastore. The broker application’s MongoDB plugin is also configured with these values.

Default: openshift/mongopass

4.5.31. mongodb_name

This is the name of the database in MongoDB in which the broker will store data.

Default: openshift_broker

4.5.32. openshift_user1

4.5.33. openshift_password1

This user and password are entered in the /etc/openshift/htpasswd file as a demo/test user. You will likely want to remove it after installation (or just use a different auth method).

Default: demo/changeme
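
With the htpasswd plugin you can manage users after installation using the htpasswd tool:

# add a user (or update an existing user's password); prompts for the password
htpasswd /etc/openshift/htpasswd newuser
# remove the demo user
htpasswd -D /etc/openshift/htpasswd demo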

4.5.34. conf_broker_auth_salt

4.5.35. conf_broker_auth_public_key

4.5.36. conf_broker_auth_private_key

4.5.37. conf_broker_auth_key_password

Salt and public/private keys used when generating secure authentication tokens for application-to-broker communication. Requests like scale up/down and Jenkins builds use these authentication tokens. These values must be the same on all broker hosts.

Default: self-signed keys are generated. These will not work with a multi-broker setup.

4.5.38. conf_broker_session_secret

4.5.39. conf_console_session_secret

Session secrets used to encode the cookies used by the console and broker. These values must be the same on all broker hosts.

4.5.40. conf_valid_gear_sizes

List of all gear sizes that will be used in this OpenShift installation.

Default: [small]

4.5.41. broker_dns_plugin

DNS plugin used by the broker to register application DNS entries. Options:

  • nsupdate - nsupdate-based plugin. Supports TSIG and GSS-TSIG based authentication: uses bind_key for TSIG, and bind_krb_keytab plus bind_krb_principal for GSS-TSIG auth.

  • avahi - Sets up mDNS-based DNS resolution. Works only for all-in-one installations.

4.5.42. broker_auth_plugin

Authentication setup for users of the OpenShift service. Options:

  • mongo - Stores username and password in MongoDB.

  • kerberos - Kerberos based authentication. Uses broker_krb_service_name, broker_krb_auth_realms, broker_krb_keytab values.

  • htpasswd - Stores username/password in an htpasswd file.

  • ldap - LDAP based authentication. Uses broker_ldap_uri.

Default: htpasswd

4.5.43. broker_krb_service_name

The KrbServiceName value for mod_auth_kerb configuration

4.5.44. broker_krb_auth_realms

The KrbAuthRealms value for mod_auth_kerb configuration

4.5.45. broker_krb_keytab

The Krb5KeyTab value of mod_auth_kerb is not configurable; the keytab is expected to be at /var/www/openshift/broker/httpd/conf.d/http.keytab

4.5.46. broker_ldap_uri

URI to the LDAP server (e.g. ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com). Set broker_auth_plugin to ldap to enable this feature.
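
For example, the relevant lines in configure_origin.pp might read (a sketch using the example URI above):

    broker_auth_plugin         => 'ldap',
    broker_ldap_uri            => 'ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com',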

4.5.47. node_container_plugin

Specify the container type to use on the node. Options:

  • selinux - This is the default OpenShift Origin container type.

4.5.48. node_frontend_plugins

Specify one or more plugins used to register HTTP and WebSocket connections for applications. Options:

  • apache-mod-rewrite - mod_rewrite-based plugin for HTTP and HTTPS requests. Well suited for installations with frequent create/delete/scale actions.

  • apache-vhost - VHost-based plugin for HTTP and HTTPS. Suited for installations with less app create/delete activity. Easier to customize.

  • nodejs-websocket - WebSocket proxy listening on ports 8000/8444.

  • haproxy-sni-proxy - TLS proxy using SNI routing on ports 2303 through 2308; requires /usr/sbin/haproxy15 (haproxy-1.5-dev19 or later).

Default: [apache-mod-rewrite,nodejs-websocket]
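
For example, to use the VHost plugin instead of mod_rewrite while keeping WebSocket support (a sketch):

    node_frontend_plugins      => ['apache-vhost', 'nodejs-websocket'],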

4.5.49. node_unmanaged_users

List of user names with UIDs in the range used by OpenShift gears who must be excluded from OpenShift gear setup.

Default: []

4.5.50. conf_node_external_eth_dev

External facing network device. Used for routing and traffic control setup.

Default: eth0

4.5.51. conf_node_supplementary_posix_groups

Name of supplementary UNIX group to add a gear to.

4.5.52. development_mode

Set development mode and extra logging.

Default: false

4.5.53. install_login_shell

Install a Getty shell which displays DNS, IP and login information. Used for all-in-one VM installation.

4.5.54. register_host_with_named

Set up DNS entries for this host in a locally installed BIND DNS instance.

Default: false

4.5.55. install_cartridges

List of cartridges to be installed on the node. Options:

  • 10gen-mms-agent

  • cron

  • diy

  • haproxy

  • mongodb

  • nodejs

  • perl

  • php

  • phpmyadmin

  • postgresql

  • python

  • ruby

  • jenkins

  • jenkins-client

  • mariadb (will install mysql on RHEL)

  • jbossews

  • jbossas

  • jbosseap

Default: [10gen-mms-agent, cron, diy, haproxy, mongodb, nodejs, perl, php, phpmyadmin, postgresql, python, ruby, jenkins, jenkins-client, mariadb]
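
For example, to install only a minimal set of cartridges on the node (a sketch):

    install_cartridges         => ['cron', 'diy', 'haproxy', 'php', 'postgresql'],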

5. Manual Tasks

This script attempts to automate as many tasks as it reasonably can. Unfortunately, it is constrained to setting up only a single host at a time. In a multi-host setup, you will need to do the following after the script has completed on each host.

  1. Set up DNS entries for hosts.

If you installed BIND with the script, then any other components installed with the script on the same host received DNS entries. Other hosts must all be defined manually, including at least your node hosts. oo-register-dns may prove useful for this.
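
For example, to add a record for an additional node host (run on the host with the BIND server; the IP is illustrative):

oo-register-dns --domain example.com --with-node-hostname node2 --with-node-ip 10.0.0.3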

  2. Copy public rsync key to enable moving gears.

The broker rsync public key needs to go on nodes, but there is no good way to script that generically. Nodes should not have password-less access to brokers to copy the .pub key, so this must be performed manually on each node host:

# scp root@broker:/etc/openshift/rsync_id_rsa.pub /root/.ssh/
(above step will ask for the root password of the broker machine)
# cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
# rm /root/.ssh/rsync_id_rsa.pub

If you skip this, each gear move will require typing root passwords for each of the node hosts involved.

  3. Copy ssh host keys between the node hosts.

All node hosts should identify with the same host keys, so that when gears are moved between hosts, ssh and git don’t give developers spurious warnings about the host keys changing. So, copy the /etc/ssh/ssh_host_* files from one node host to all the rest (or, if using the same image for all hosts, just keep the keys from the image).
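
A minimal sketch, assuming node1 already has the keys you want to standardize on and node2 is a freshly installed node:

scp /etc/ssh/ssh_host_* root@node2.example.com:/etc/ssh/
ssh root@node2.example.com "systemctl restart sshd.service"
# on RHEL 6, use: service sshd restart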