
finally, add some doc

master
Nicolas Massé 9 years ago
commit 86ed2d5343
  1. README.md (130)
  2. doc/ANSIBLE_WRAPPER.md (52)
  3. doc/BASTION.md (99)
  4. doc/CUSTOMIZATION.md (1)
  5. doc/MACHINE_PREPARATION.md (36)
  6. doc/PLAYBOOKS.md (50)
  7. doc/ROLES.md (20)

README.md (130 lines changed)
@@ -1,119 +1,53 @@
# An "easy to use" OpenShift Lab

This project is an Ansible Playbook to install OpenShift in a Lab Environment.
Its goal is to help people easily install OpenShift in a lab environment,
for a test drive or a PoC. So, this project focuses mostly on ease of use instead
of security, availability, etc. **DO NOT USE THIS PROJECT IN PRODUCTION**.
You have been warned.

It features multiple architecture choices:

- All-in-one: master, etcd, infra node, app node on the same machine (**DONE**)
- Small Cluster: 1 master with etcd, 1 infra node, 2 app nodes (**TODO**)
- Big Cluster: 3 masters with etcd, 2 infra nodes, 2 app nodes, 1 load balancer (**TODO**)

By default, it deploys the following software in addition to OpenShift:

- Red Hat SSO
- 3scale
- the [OpenShift-Hostpath-Provisioner](https://github.com/nmasse-itix/OpenShift-HostPath-Provisioner)

This project is different from existing "demo" OpenShift playbooks in the sense that:

- It features a common inventory file for both the OpenShift playbooks and the complementary playbooks (it's easier to maintain).
- The underlying openshift-ansible playbooks are included directly (as opposed to other approaches that run an `ansible-playbook` command from inside the main playbook).

By default, this project comes with a git submodule reference to the `openshift-ansible` repository for convenience.
But you could replace this reference with a symlink to your `openshift-ansible` installation, for instance if you installed the supported package from Red Hat.
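
A minimal sketch of the symlink approach, assuming the playbooks were installed by Red Hat's `atomic-openshift-utils` package under `/usr/share/ansible/openshift-ansible` (the package name and path are assumptions; adjust to your installation):

```
# Replace the submodule checkout with a symlink to the RPM-installed playbooks
rm -rf openshift-ansible
ln -s /usr/share/ansible/openshift-ansible openshift-ansible
```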

## Setup

1. First of all, clone this repo:
```
git clone https://github.com/nmasse-itix/OpenShift-Lab.git
```
2. Pull the "openshift-ansible" sub-project using:
```
git submodule init
git submodule update
```
3. Review `allinone.hosts` and change the hostnames to target your environment; a hypothetical excerpt is shown below.
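As an illustration, a hypothetical `allinone.hosts` excerpt (the group layout follows the usual `openshift-ansible` inventory conventions; the hostname is a placeholder to replace with your own):
```
[OSEv3:children]
masters
nodes
etcd

[masters]
vm.openshift.test

[etcd]
vm.openshift.test

[nodes]
vm.openshift.test
```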
4. If needed, bootstrap your machines (optional):
```
./ansible bootstrap vm.openshift.test
```
5. Run the playbook that installs everything on one machine:
```
./ansible play allinone
```

## Further reading

If you plan to use this project regularly, you might have a look at the [Ansible roles description](doc/ROLES.md).
And if you need to customize this project to suit your own needs, have a look at the [Customization Guide](doc/CUSTOMIZATION.md).

doc/ANSIBLE_WRAPPER.md (52 lines added)

@@ -0,0 +1,52 @@
# Ansible Wrapper presentation
The Ansible Wrapper is a small shell script (`./ansible`) that does two things:
- It calls the bootstrap playbook with the right environment variables
- It calls the target playbooks (`allinone.yml` for instance) with the right inventory file
## Bootstrap
Usually, when machines are provisioned, they are not ready to be managed by Ansible.
For example:
- There is no regular user account, `root` is the only available user
- Your SSH keys are not yet installed, so password authentication is required
- Sudo might not be configured
- etc.
The Ansible wrapper will:
- Make sure the SSH Host Key of the target machine is trusted (otherwise Ansible would complain...)
- Authenticate with a password for the first connection (thanks to `sshpass`)
- Add your SSH keys to the `authorized_keys`
- Create a regular user (by default: `redhat`)
- Install and configure sudo
- Register the machine with the Red Hat Network (RHN)
- Attach a subscription pool
To use the wrapper, you need to make sure you have `sshpass` installed:
```
sshpass -V
```
If not installed, set up sshpass as explained here: https://gist.github.com/arunoda/7790979
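
For instance, on RHEL or CentOS, `sshpass` is available from the EPEL repository (assuming your control machine runs one of those):
```
sudo yum install sshpass
```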
To bootstrap a machine, just use:
```
./ansible bootstrap machine1.compute.internal
```
__Tip:__ You can pass multiple machines on the command line to bootstrap them all at the same time.
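For instance, reusing the hostname pattern from above:
```
./ansible bootstrap machine1.compute.internal machine2.compute.internal
```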
The wrapper will then ask you a few questions:
- The root password. If you have already set up SSH key authentication, you can just hit enter.
- Your RHN login
- Your RHN password
- The Pool ID that you would like to use. If you do not provide a Pool ID, no pool will be attached and you will have to do it later manually.
## Daily usage
Once your machines are bootstrapped, you can launch the target playbook (`allinone` for instance) with:
```
./ansible play allinone
```
__Note:__ the `play` command is just a shortcut to `ansible-playbook -i <target>.hosts <target>.yml`.
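
For instance, assuming the inventory file is named `allinone.hosts`, `./ansible play allinone` expands to:
```
ansible-playbook -i allinone.hosts allinone.yml
```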

doc/BASTION.md (99 lines added)

@@ -0,0 +1,99 @@
## Connection through a bastion host
Sometimes, your target machines are on a restricted network where access is
done through a "bastion host" (also called "jump host").
This document explains how to configure this project to work with such a
setup.
Two variants of this configuration are possible:
1. The jump host holds the SSH keys to connect to the target host
2. The jump host has no SSH key, the SSH keys remain on your machine

In the second configuration, you will have to set up your SSH Agent (if not
already done) and forward it.
### Step 1: Set up your SSH Agent (optional)
Run the SSH Agent:
```
eval "$(ssh-agent -s)"
```
And add your SSH key to your agent:
```
ssh-add ~/.ssh/id_rsa
```
Source: https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
### Step 2: Create the ssh.cfg
Create a file named `ssh.cfg` with the following content:
```
Host jump.host
Hostname jump.host
User john-adm
ForwardAgent yes
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 5m
Host 10.0.0.*
ProxyCommand ssh -q -W %h:%p jump.host
User john
```
You will have to replace `jump.host` (three occurrences) with the hostname of your jump host.
Also make sure that the two usernames match your environment:
- The first `User` stanza is the username you will use to connect to your jump host
- The second `User` stanza is the username you will use to connect to your target host

You will also have to replace `10.0.0.*` with the subnet of your target machines.
If you reference your machines by DNS names instead of IP addresses, you could use
the DNS suffix common to your target machines, like `*.compute.internal`, as shown below.
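For example, with the DNS-suffix approach, the second `Host` block of the `ssh.cfg` above would become:
```
Host *.compute.internal
ProxyCommand ssh -q -W %h:%p jump.host
User john
```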
Note: the `ForwardAgent` stanza is only required if your jump host does not hold
the SSH keys to connect to your target machines.
Now you can test your ssh.cfg by issuing the following command:
```
ssh -F ssh.cfg your.target.host
```
If your configuration is correct, you will be directly connected to your target
host.
### Step 3: Edit the Ansible configuration file
Edit the `ansible.cfg` file and add:
```
# Connection through a jump host
[ssh_connection]
ssh_args = -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=30m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
```
You can test that your setup is correct by using the `ping` module of Ansible:
```
ansible -i your-inventory-file all -m ping
```
If your setup is correct, you should see something like:
```
machine1.internal | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
machine2.internal | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
Note: sometimes your lab has no DNS server and you have to connect to your target
machines using IP addresses. If you still want to give your machines a nice name
in Ansible, you can declare the target machines in the inventory file like this:
```
machine1.internal ansible_host=10.0.0.1
machine2.internal ansible_host=10.0.0.2
```

doc/CUSTOMIZATION.md (1 line added)

@@ -0,0 +1 @@
# TODO

doc/MACHINE_PREPARATION.md (36 lines added)

@@ -0,0 +1,36 @@
# Preparation of target machines
Currently, the machines need to have at least two disk partitions:
- 1 partition for the Operating System (**REQUIRED**)
- 1 LVM partition for the Docker Storage (**REQUIRED**)

A third partition is recommended but not required:
- 1 partition for the OpenShift Persistent Volumes (**OPTIONAL**)

If your machine has only one disk, you can create partitions (which may or may not use LVM underneath; your choice).
An alternative when using Virtual Machines is to add three disks to the VM, which makes the setup a bit easier.
The OS partition is created by the RHEL installer, so you do not have to care much about it.

The Docker Storage partition **has to be LVM** and **has to be in a separate Volume Group**.
For instance, if your Docker Storage partition is `/dev/sda2`, you can create a separate Volume Group by using:
```
vgcreate docker /dev/sda2
```
The OpenShift Persistent Volumes partition, while not required, is still highly recommended.
With a dedicated partition, Persistent Volumes that start to grow will not
fill up the OS partition.

If your OpenShift PV partition is `/dev/sda3`, you can set it up by using:
```
mkfs.xfs /dev/sda3
mkdir -p /var/openshift                # create the mount point
echo "/dev/sda3 /var/openshift xfs defaults 0 0" >> /etc/fstab
mount /var/openshift                   # mount now, without waiting for a reboot
```
If you kept the default values (`docker` for the Volume Group name and
`/var/openshift` for the OpenShift PV mount point), no further setup is required.
Otherwise, you might have to set the following variables in your inventory file, as sketched below:
- `docker_storage_vg`
- `hostpath_provisioner_options`
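
For instance, a hypothetical inventory line that overrides the Docker Volume Group name (`vg-docker` is a placeholder; `hostpath_provisioner_options` can be set the same way):
```
vm.openshift.test docker_storage_vg=vg-docker
```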

doc/PLAYBOOKS.md (50 lines added)

@@ -0,0 +1,50 @@
# Playbooks description
## Bootstrap (`bootstrap.yml`)
The bootstrap playbook is used to prepare a machine to be managed by Ansible.
Namely, it will:
- Create a regular user account (named `redhat`)
- Add your SSH Public Key to the `authorized_keys` of `root` and `redhat`
- Install and configure `sudo` so that the `redhat` user can launch commands as `root` without a password
- Register the machine on the RHN (Red Hat Network)
To work, this playbook requires a few environment variables:

| Environment Variable | Description |
| --- | --- |
| `RHN_LOGIN` | Your Red Hat Network login |
| `RHN_PASSWORD` | Your Red Hat Network password |
| `RHN_POOLID` | The subscription pool you want to use |
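
A sketch of providing these variables by hand before launching the bootstrap (all values are placeholders):
```
export RHN_LOGIN='jdoe'
export RHN_PASSWORD='changeme'
export RHN_POOLID='<pool-id>'   # optional: if unset, no pool is attached
```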
__Tip:__ You can get the Pool ID by querying:
```
sudo subscription-manager list --available --matches '*OpenShift*'
```
This playbook is best used with the [Ansible Wrapper](ANSIBLE_WRAPPER.md).
## All-in-one cluster (`allinone.yml`)
The All-in-one cluster playbook will deploy everything on one machine. It is very
convenient for development or PoCs where the focus is on the features rather than on the infrastructure.
Minimal requirements for the target machine are:
- 2 Cores
- 4 GB of RAM
- 30 GB Hard Disk, partitioned as explained in the [Machine Preparation Guide](MACHINE_PREPARATION.md)
Recommended configuration:
- 4 Cores
- 10 GB of RAM
- 60 GB Hard Disk, partitioned as explained in the [Machine Preparation Guide](MACHINE_PREPARATION.md)
See [Machine Preparation Guide](MACHINE_PREPARATION.md) for more details about partitioning.
## Small cluster (TODO)
TODO
## Big cluster (TODO)
TODO

doc/ROLES.md (20 lines added)

@@ -0,0 +1,20 @@
# Roles description
## Bootstrap roles
| Role | Description |
| --- | --- |
| [bootstrap](../roles/bootstrap/) | adds your SSH key to `authorized_keys`, creates users, configures sudo |
| [register-rhn](../roles/register-rhn/) | registers the target machine on RHN (Red Hat Network) and attaches a subscription pool |
## Regular roles
| Role | Description |
| --- | --- |
| [base](../roles/base/) | configures SSH to forbid password authentication, installs basic software and sets the hostname |
| [name-resolution](../roles/name-resolution/) | ensures name resolution through the whole cluster |
| [docker](../roles/docker/) | installs docker and configures docker storage |
| [openshift-prereq](../roles/openshift-prereq/) | ensures the system meets the prerequisites for the OpenShift installation |
| [openshift-postinstall](../roles/openshift-postinstall/) | installs the latest JBoss ImageStreams |
| [3scale](../roles/3scale/) | deploys 3scale |
| [sso](../roles/sso/) | deploys Red Hat SSO |