# Adding Hosts to an Existing Cluster
Depending on how your OpenShift Container Platform cluster was installed, you can add new hosts (either nodes or masters) to your installation by using the install tool for quick installations, or by using the scaleup.yml playbook for advanced installations.
If you used the quick install tool to install your OpenShift Container Platform cluster, you can use the quick install tool to add a new node host to your existing cluster.
Currently, you cannot use the quick installer tool to add new master hosts. You must use the advanced installation method to do so.
If you used the installer in either interactive or unattended mode, you can re-run the installation as long as you have an installation configuration file at ~/.config/openshift/installer.cfg.yml (or specify a different location with the -c option).
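For example (a sketch; adjust the path if your configuration file is not at the default location), a re-run to add nodes uses the scaleup subcommand described in the procedure below:

# atomic-openshift-installer -c ~/.config/openshift/installer.cfg.yml scaleup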
See the cluster limits section for the recommended maximum number of nodes.
To add nodes to your installation:
Ensure you have the latest installer and playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Run the installer with the scaleup subcommand in interactive or unattended mode:
# atomic-openshift-installer [-u] [-c </path/to/file>] scaleup
The installer detects your current environment and allows you to add additional nodes:
*** Installation Summary ***

Hosts:
- 100.100.1.1
  - OpenShift master
  - OpenShift node
  - Etcd (Embedded)
  - Storage

Total OpenShift masters: 1
Total OpenShift nodes: 1

---

We have detected this previously installed OpenShift environment.

This tool will guide you through the process of adding additional nodes to your cluster.

Are you ready to continue? [y/N]:
Choose (y) and follow the on-screen instructions to complete your desired task.
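If you prefer to skip the interactive prompts, a sketch of the equivalent unattended re-run (assuming the default configuration file location) is:

# atomic-openshift-installer -u -c ~/.config/openshift/installer.cfg.yml scaleup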
If you installed using the advanced install, you can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, then runs the configuration playbooks on the new hosts only. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.
The scaleup playbook only configures the new host. It does not update NO_PROXY in master services and it does not restart master services.
This process is similar to re-running the installer in the quick installation method to add nodes; however, you have more configuration options available when using the advanced method and when running the playbooks directly.
You must have an existing inventory file (for example, /etc/ansible/hosts) that is representative of your current cluster configuration in order to run the scaleup.yml playbook.
If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts (previously located at ~/.config/openshift/.ansible/hosts) for the last inventory file that the installer generated, and use or modify that as needed as your inventory file. You must then specify the file location with -i when calling ansible-playbook later.
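For example (a sketch that uses the node scaleup playbook path shown later in this procedure), the call would look like:

# ansible-playbook -i ~/.config/openshift/hosts \
/usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml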
See the cluster limits section for the recommended maximum number of nodes.
To add a host to an existing cluster:
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section:
For example, to add a new node host, add new_nodes:
[OSEv3:children]
masters
nodes
new_nodes
To add new master hosts, add new_masters.
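For example (combining this with the note below that new master hosts must also be added as new nodes), the group definitions would look like:

[OSEv3:children]
masters
nodes
new_masters
new_nodes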
Create a [new_<host_type>] section much like an existing section, specifying host information for any new hosts you want to add. For example, when adding a new node:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
See Configuring Host Variables for more options.
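As a sketch of such options (the openshift_ip, openshift_hostname, and openshift_public_hostname variables are common openshift-ansible host variables shown here only as an illustration, and the IP address is a placeholder), a new node entry can also pin its address and hostnames:

[new_nodes]
node3.example.com openshift_ip=192.168.1.13 openshift_hostname=node3.example.com openshift_public_hostname=node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"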
When adding new masters, hosts added to the [new_masters] section must also be added to the [new_nodes] section. This ensures the new master host is part of the OpenShift SDN.
[masters]
master[1:2].example.com

[new_masters]
master3.example.com

[nodes]
master[1:2].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

[new_nodes]
master3.example.com
Masters are also automatically marked as unschedulable for pod placement by the installer.
If you label a master host with the region=infra label and have no other dedicated infrastructure nodes, you must also explicitly mark the host as schedulable.
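As a sketch (assuming the openshift_schedulable host variable, which openshift-ansible uses to control whether a node accepts pods, and labels chosen only for illustration), the entry for the new master in the [new_nodes] section could look like:

[new_nodes]
master3.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true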
Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.
For additional nodes:
# ansible-playbook [-i /path/to/file] \
/usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml
For additional masters:
# ansible-playbook [-i /path/to/file] \
/usr/share/ansible/openshift-ansible/playbooks/openshift-master/scaleup.yml
After the playbook completes successfully, verify the installation.
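For example, one quick check (assuming the oc client is configured on a master host) is to confirm that each new host is listed and reports a Ready status:

# oc get nodes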
Finally, move any hosts you had defined in the [new_<host_type>] section into their appropriate section (but leave the [new_<host_type>] section definition itself in place) so that subsequent runs using this inventory file are aware of the nodes but do not handle them as new nodes. For example, when adding new nodes:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

[new_nodes]
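As an optional check (a sketch, not part of the documented procedure), you can confirm that the new_nodes group is now empty, so that a later scaleup run will not target these hosts again:

# ansible new_nodes -i /etc/ansible/hosts --list-hosts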
You can add new etcd hosts to your cluster by running the etcd scaleup playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on the new hosts only. Before running the etcd scaleup.yml playbook, complete all prerequisite host preparation steps.
To add an etcd host to an existing cluster:
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file, add new_<host_type> to the [OSEv3:children] group and add hosts under the new_<host_type> group:
For example, to add a new etcd host, add new_etcd:
[OSEv3:children]
masters
nodes
etcd
new_etcd

[etcd]
etcd1.example.com
etcd2.example.com

[new_etcd]
etcd3.example.com
Run the etcd scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.
$ ansible-playbook [-i /path/to/file] \
/usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/scaleup.yml
After the playbook completes successfully, verify the installation.
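As a sketch of one way to verify (the certificate paths under /etc/etcd/ are assumptions based on a typical OpenShift Container Platform etcd host, and the flags shown are etcd v2-style etcdctl options), you can check cluster health from one of the etcd hosts:

# etcdctl --ca-file=/etc/etcd/ca.crt --cert-file=/etc/etcd/peer.crt \
--key-file=/etc/etcd/peer.key \
--endpoints=https://etcd1.example.com:2379 cluster-health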