The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Container Platform environment.
You must have an active OpenShift Container Platform subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.
The system requirements vary per host type:
Masters, nodes, and external etcd nodes each have their own minimum hardware requirements.
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.
The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
OpenShift Container Platform only supports servers with x86_64 architecture.
Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:
In a highly available OpenShift Container Platform cluster with external etcd, a master host should have, in addition to the minimum requirements in the table above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of 2000 pods would be the minimum requirements of 2 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3 GB of RAM, totaling 4 CPU cores and 19 GB of RAM.
When planning an environment with multiple masters, a minimum of three etcd hosts and a load-balancer between the master hosts are required.
See Recommended Practices for OpenShift Container Platform Master Hosts for performance guidance.
The size of a node host depends on the expected size of its workload. As an OpenShift Container Platform cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
For more information, see Sizing Considerations and Cluster Limits.
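For example, if the applications scheduled to run on a node are expected to consume about 8 CPU cores and 30 GB of RAM (illustrative figures), adding roughly 10 percent for overhead gives a target of about 9 CPU cores and 33 GB of RAM for that node host.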
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
Any nodes used in a Container-Native Storage or Container-Ready Storage cluster are considered storage nodes. Storage nodes can be grouped into distinct cluster groups, though a single node cannot be in multiple groups. For each group of storage nodes:
A minimum of three storage nodes per group is required.
Each storage node must have a minimum of 8 GB of RAM. This is to allow running the Red Hat Gluster Storage pods, as well as other applications and the underlying operating system.
Each GlusterFS volume also consumes memory on every storage node in its storage cluster, which is about 30 MB. The total amount of RAM should be determined based on how many concurrent volumes are desired or anticipated. For example, 100 concurrent volumes would consume roughly 3 GB of additional RAM on each storage node in the cluster.
Each storage node must have at least one raw block device with no present data or metadata. These block devices will be used in their entirety for GlusterFS storage. Make sure the following are not present:
Partition tables (GPT or MSDOS)
Filesystems or residual filesystem signatures
LVM2 signatures of former Volume Groups and Logical Volumes
LVM2 metadata of LVM2 physical volumes
If in doubt, wipefs -a <device> should clear any of the above.
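For example, assuming the raw block device is /dev/sdb (substitute the actual device name), you can list any existing signatures before clearing them:
# wipefs /dev/sdb
# wipefs -a /dev/sdb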
It is recommended to plan for two clusters: one dedicated to storage for infrastructure applications (such as an OpenShift Container Registry) and one dedicated to storage for general applications. This would require a total of six storage nodes. This recommendation is made to avoid potential performance impacts on I/O and volume creation.
By default, OpenShift Container Platform masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Container Platform to use by setting the GOMAXPROCS environment variable. See the Go Language documentation for more information, including how the GOMAXPROCS environment variable works.
For example, run the following before starting the server to make OpenShift Container Platform only run on one core:
# export GOMAXPROCS=1
Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Container Platform or the installer will fail. Also, configure SELINUX=enforcing and SELINUXTYPE=targeted in the /etc/selinux/config file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
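You can optionally confirm that a host is running in enforcing mode with the getenforce command, which should report Enforcing:
# getenforce
Enforcing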
To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
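To confirm which version is installed, you can query the RPM database:
# rpm -q glusterfs-fuse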
OverlayFS is a union file system that allows you to overlay one file system on top of another.
As of Red Hat Enterprise Linux 7.4, you have the option to configure your OpenShift Container Platform environment to use OverlayFS. The overlay2 graph driver is fully supported in addition to the older overlay driver. However, Red Hat recommends using overlay2 instead of overlay because of its speed and simple implementation.
See the Overlay Graph Driver section of the Atomic Host documentation for instructions on how to enable the overlay2 graph driver for the Docker service.
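As a rough sketch of one common approach on RHEL 7 (the Atomic Host documentation referenced above is the authoritative procedure), the graph driver is selected in /etc/sysconfig/docker-storage-setup before Docker storage is first configured:
# cat /etc/sysconfig/docker-storage-setup
STORAGE_DRIVER=overlay2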
You must enable Network Time Protocol (NTP) to prevent masters and nodes in the cluster from going out of sync. Set openshift_clock_enabled to true in the Ansible inventory file to enable NTP on masters and nodes in the cluster during Ansible installation.
# openshift_clock_enabled=true
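For example, the variable is placed alongside the other cluster variables in the [OSEv3:vars] section of the inventory file (assuming the standard openshift-ansible inventory layout):
[OSEv3:vars]
openshift_clock_enabled=true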
OpenShift Container Platform runs containers on your hosts, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access your host’s Docker daemon and perform docker build and docker push operations. As such, you should be aware of the inherent security risks associated with performing docker run operations on arbitrary images, as they effectively have root access. This is particularly relevant for Docker builds.
You can limit the exposure to harmful containers by assigning specific builds to nodes so that any exposure is limited to those nodes. To do this, see the Assigning builds to specific nodes section of the developer guide and Configuring global build defaults and overrides section of the installation and configuration guide. You can also use security context constraints to control the actions that a pod can perform and what it has the ability to access.
For more information, see these articles:
The following section defines the requirements of the environment containing your OpenShift Container Platform configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.
OpenShift Container Platform requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software and can provide name resolution to hosts and containers running on the platform.
Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.
Key components of OpenShift Container Platform run themselves inside of containers and use the following process for name resolution:
By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
OpenShift Container Platform then inserts one DNS value into the pods (above the node’s nameserver values). That value is defined in the /etc/origin/node/node-config.yaml file by the dnsIP parameter, which by default is set to the address of the host node because the host is using dnsmasq.
If the dnsIP parameter is omitted from the node-config.yaml file, then the value defaults to the kubernetes service IP, which is the first nameserver in the pod’s /etc/resolv.conf file.
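For reference, the parameter appears as a top-level key in /etc/origin/node/node-config.yaml; the address below is illustrative (by default it is set to the node host’s own address):
dnsIP: 10.64.33.101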
As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP addresses.
The following is an example set of DNS records for the Single Master and Multiple Nodes scenario:
master    A   10.64.33.100
node1     A   10.64.33.101
node2     A   10.64.33.102
If you do not have a properly functioning DNS environment, you could experience failure with:
Product installation via the reference Ansible-based scripts
Deployment of the infrastructure containers (registry, routers)
Access to the OpenShift Container Platform web console, because it is not accessible via IP address alone
Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:
Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager, as shown in the example after this list.
Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can add a value to dnsIP in the node-config.yaml file to prepend the pod’s resolv.conf file. The second nameserver is then defined by the host’s first nameserver. By default, this will be the IP address of the node host.
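For example, with DHCP disabled on a host whose NetworkManager connection is named eth0 (the connection name, addresses, and DNS server below are illustrative), the interface can be made static and given a nameserver with nmcli:
# nmcli con mod eth0 ipv4.method manual ipv4.addresses 10.64.33.101/24 \
    ipv4.gateway 10.64.33.254 ipv4.dns 10.64.33.1
# nmcli con up eth0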
For most configurations, do not set the openshift_dns_ip option. Instead, allow the installer to configure each node to use dnsmasq and forward requests to the external DNS provider or SkyDNS, the internal DNS service for cluster-wide DNS resolution of internal hostnames for services and pods. If you do set the openshift_dns_ip option, it should be set to a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
To verify that hosts can be resolved by your DNS server:
Check the contents of /etc/resolv.conf:
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 10.64.33.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
In this example, 10.64.33.1 is the address of our DNS server.
Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP addresses of all masters and nodes in your OpenShift Container Platform environment:
$ dig <node_hostname> @<IP_address> +short
For example:
$ dig master.example.com @10.64.33.1 +short
10.64.33.100
$ dig node1.example.com @10.64.33.1 +short
10.64.33.101
Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container Platform router.
For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:
*.cloudapps.example.com. 300 IN A 192.168.133.2
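To spot-check the wildcard entry, query an arbitrary name under the wildcard domain (assuming the DNS server from the earlier examples serves this zone); it should return the router host’s address:
$ dig foo.cloudapps.example.com @10.64.33.1 +short
192.168.133.2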
In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.
In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Container Platform may fail to resolve host names properly.
A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high-availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure it using an FQDN, the FQDN should resolve on all nodes.
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP addresses.
NM_CONTROLLED is set to yes by default. If NM_CONTROLLED is set to no, then the NetworkManager dispatch script does not create the relevant origin-upstream-dns.conf dnsmasq file, and you would need to configure dnsmasq manually.
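To check how a particular interface is configured, you can inspect its ifcfg file (eth0 here is an example interface name); an empty result means the default of yes is in effect:
# grep NM_CONTROLLED /etc/sysconfig/network-scripts/ifcfg-eth0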
The OpenShift Container Platform installation automatically creates a set of internal firewall rules on each host using iptables. However, if your network configuration uses an external firewall, such as a hardware-based firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
While iptables is the default firewall, firewalld is recommended for new installations. You can enable firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file.
Ensure the following ports required by OpenShift Container Platform are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.
Port | Protocol | Purpose
---|---|---
4789 | UDP | Required for SDN communication between pods on separate hosts.
Port | Protocol | Purpose
---|---|---
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
4789 | UDP | Required for SDN communication between pods on separate hosts.
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on.
Port | Protocol | Purpose
---|---|---
4789 | UDP | Required for SDN communication between pods on separate hosts.
10250 | TCP | The master proxies to node hosts via the Kubelet for oc commands.
In the following table, (L) indicates the marked port is also used in loopback mode, enabling the master to communicate with itself. In a single-master cluster:
Ports marked (L) must be open.
Ports not marked (L) are not required to be open.
In a multiple-master cluster, all the listed ports must be open.
Port | Protocol | Purpose
---|---|---
53 (L) or 8053 (L) | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
2049 (L) | TCP/UDP | Required when provisioning an NFS host as part of the installer.
2379 | TCP | Used for standalone etcd (clustered) to accept changes in state.
2380 | TCP | etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered).
4789 (L) | UDP | Required for SDN communication between pods on separate hosts.
9000 | TCP | If you choose the native HA method, optional to allow access to the haproxy statistics page.
Port | Protocol | Purpose
---|---|---
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on.
Port | Protocol | Purpose
---|---|---
22 | TCP | Required for SSH by the installer or system administrator.
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts.
80 or 443 | TCP | For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router.
1936 | TCP | (Optional) Required to be open when running the template router to access statistics. Can be open externally or internally to connections depending on if you want the statistics to be expressed publicly. Can require extra configuration to open. See the Notes section below for more information.
2379 and 2380 | TCP | For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd.
4789 | UDP | For VxLAN use (OpenShift SDN). Required only internally on node hosts.
8443 | TCP | For use by the OpenShift Container Platform web console, shared with the API server.
10250 | TCP | For use by the Kubelet. Required to be externally open on nodes.
Notes
In the above examples, port 4789 is used for User Datagram Protocol (UDP).
When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
OpenShift Container Platform internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.
Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure iptables to open port 1936:
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp \
    --dport 1936 -j ACCEPT
Port | Protocol | Purpose
---|---|---
9200 | TCP | For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using oc expose.
9300 | TCP | For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster may communicate with each other.
The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
There are certain aspects to take into consideration if installing OpenShift Container Platform on a cloud provider.
For Amazon Web Services, see the Permissions and the Configuring a Security Group sections.
For OpenStack, see the Permissions and the Configuring a Security Group sections.
Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/roles/openshift_facts/library/openshift_facts.py
For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names section.
Now, verify the detected common settings. If they are not what you expect them to be, you can override them.
The Advanced Installation topic discusses the available Ansible variables in greater detail.
The settings to verify for each host are its detected host name, IP address, public host name, and public IP address; if needed, these can be overridden with the openshift_hostname, openshift_ip, openshift_public_hostname, and openshift_public_ip Ansible variables.