Overview

One method to expose a service is to assign an external IP address directly to the service that you want to make accessible from outside the cluster.

Make sure you have created a range of IP addresses to use, as shown in Defining the Public IP Range.

By setting an external IP on the service, OpenShift Container Platform sets up iptables rules to allow traffic arriving at any cluster node that is targeting that IP address to be sent to one of the internal pods. This is similar to the internal service IP addresses, but the external IP tells OpenShift Container Platform that this service should also be exposed externally at the given IP. The administrator must assign the IP address to a host (node) interface on one of the nodes in the cluster. Alternatively, the address can be used as a virtual IP (VIP).
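
For reference, an external IP appears as a field in the service definition. The following is a minimal sketch of such a service; the name, selector, and address are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-55-rhel7        # illustrative service name
    spec:
      selector:
        name: mysql-55-rhel7      # illustrative pod selector
      ports:
      - port: 3306
      externalIPs:                # traffic to this address on any node is forwarded to the pods
      - 192.168.120.10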

These IPs are not managed by OpenShift Container Platform and administrators are responsible for ensuring that traffic arrives at a node with this IP.

The following is a non-HA solution and does not configure IP failover. IP failover is required to make the service highly available.

This process involves the following tasks: defining the public IP range, creating a project and service, exposing the service to create a route, assigning an IP address to the service, and configuring networking. Optionally, an administrator can then configure IP failover.

Administrator Prerequisites

Before starting this procedure, the administrator must:

  • Set up the external port to the cluster networking environment so that requests can reach the cluster. For example, names can be configured into DNS to point to specific nodes or other IP addresses in the cluster. The DNS wildcard feature can be used to configure a subset of names to an IP address in the cluster. This allows users to set up routes within the cluster without further administrator attention.

  • Make sure that the local firewall on each node permits the request to reach the IP address.

  • Configure the OpenShift Container Platform cluster to use an Identity Provider that allows appropriate user access.

  • Make sure there is at least one user with the cluster-admin role. To add this role to a user, run the following command:

    $ oadm policy add-cluster-role-to-user cluster-admin <username>
  • Have an OpenShift Container Platform cluster with at least one master and at least one node, and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out of scope for this topic.

Defining the Public IP Range

The first step in allowing access to a service is to define an external IP address range in the master configuration file:

  1. Log into OpenShift Container Platform as a user with the cluster-admin role.

    $ oc login
    Authentication required (openshift)
    Username: admin
    Password:
    Login successful.
    
    You have access to the following projects and can switch between them with 'oc project <projectname>':
      * default
    Using project "default".
  2. Configure the externalIPNetworkCIDRs parameter in the /etc/origin/master/master-config.yaml file as shown:

    networkConfig:
      externalIPNetworkCIDRs:
      - <ip_address>/<cidr>

    For example:

    networkConfig:
      externalIPNetworkCIDRs:
      - 192.168.120.0/24
  3. Restart the OpenShift Container Platform master services to apply the changes.

    # systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers

The IP address pool must terminate at one or more nodes in the cluster.
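
To confirm that the parameter is in place after the restart, one simple check, assuming the default master configuration path shown above, is:

    $ grep -A1 externalIPNetworkCIDRs /etc/origin/master/master-config.yaml

The output should include the CIDR range you configured, 192.168.120.0/24 in this example.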

Create a Project and Service

If the project and service that you want to expose do not exist, first create the project, then the service.

If the project and service already exist, skip to the next section: Expose the Service to Create a Route.

  1. Log into OpenShift Container Platform.

  2. Create a new project for your service:

    $ oc new-project <project_name>

    For example:

    $ oc new-project external-ip
  3. Use the oc new-app command to create a service:

    For example:

    $ oc new-app \
        -e MYSQL_USER=admin \
        -e MYSQL_PASSWORD=redhat \
        -e MYSQL_DATABASE=mysqldb \
        registry.access.redhat.com/openshift3/mysql-55-rhel7
  4. Run the following command to see that the new service is created:

    $ oc get svc
    NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    mysql-55-rhel7     172.30.131.89   <none>        3306/TCP   13m

    By default, the new service does not have an external IP address.
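
To inspect the new service's selector, cluster IP, and endpoints in more detail, you can describe it. The service name matches the example above:

    $ oc describe svc mysql-55-rhel7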

Expose the Service to Create a Route

You must expose the service as a route using the oc expose command.

To expose the service:

  1. Log into OpenShift Container Platform.

  2. Switch to the project where the service you want to expose is located:

    $ oc project external-ip
  3. Run the following command to expose the route:

    $ oc expose service <service-name>

    For example:

    $ oc expose service mysql-55-rhel7
    route "mysql-55-rhel7" exposed

    A command to verify the resulting route object follows this procedure.
  4. On the master, use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:

    $ curl <cluster-ip>:<port>

    For example:

    $ curl 172.30.131.89:3306

    The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.

    If you have a MySQL client, log in with the standard CLI command:

    $ mysql -h 172.30.131.89 -u admin -p
    Enter password:
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    
    MySQL [(none)]>
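
You can also confirm that the route object created in step 3 exists:

    $ oc get route mysql-55-rhel7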

Then, assign an IP address to the service and configure networking, as described in the following sections.

Assigning an IP Address to the Service

To assign an external IP address to a service:

  1. Log into OpenShift Container Platform.

  2. Switch to the project where the service you want to expose is located. If the project or service does not exist, see Create a Project and Service.

  3. Run the following command to assign an external IP address to the service you want to access. Use an IP address from the range defined in Defining the Public IP Range:

    $ oc patch svc <name> -p '{"spec":{"externalIPs":["<ip_address>"]}}'

    The <name> is the name of the service, and -p indicates a patch to be applied to the service object. The expression in the brackets assigns the specified IP address to the specified service. An example of removing the address again follows this procedure.

    For example:

    $ oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.168.120.10"]}}'

    "mysql-55-rhel7" patched
  4. Run the following command to see that the service has a public IP:

    $ oc get svc
    NAME               CLUSTER-IP      EXTERNAL-IP     PORT(S)    AGE
    mysql-55-rhel7     172.30.131.89   192.168.120.10  3306/TCP   13m
  5. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address:

    $ curl <public-ip>:<port>

    For example:

    $ curl 192.168.120.10:3306

    If you get a string of characters with the Got packets out of order message, you are connected to the service.

    If you have a MySQL client, log in with the standard CLI command:

    $ mysql -h 192.168.120.10 -u admin -p
    Enter password:
    Welcome to the MariaDB monitor. Commands end with ; or \g.
    
    MySQL [(none)]>
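
If you later need to remove the external IP address from the service, the same patch mechanism can clear the field. The following is a sketch using the example service; an empty list removes all assigned external IPs:

    $ oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":[]}}'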

Configuring Networking

After the external IP address is assigned, you need to create routes to that IP.

The following steps are general guidelines for configuring the networking required to access the exposed service from other nodes. As network environments vary, consult your network administrator for specific configurations that need to be made within your environment.

These steps assume that all of the systems are on the same subnet.

On the master:

  1. Restart the network to make sure the network is up.

    $ service network restart
    Restarting network (via systemctl):  [  OK  ]

    If the network is not up, you will receive error messages such as Network is unreachable when running the following commands.

  2. Run the following command with the external IP address of the service you want to expose and the device name associated with the host IP from the ifconfig command output:

    $ ip address add <external-ip> dev <device>

    For example:

    $ ip address add 192.168.120.10 dev eth0

    If you need to, run the following command to obtain the IP address of the host server where the master resides:

    $ ifconfig

    Look for the device whose flags include UP,BROADCAST,RUNNING,MULTICAST:

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.16.41.22  netmask 255.255.248.0  broadcast 10.16.47.255
            ...
  3. Add a route between the IP address of the host where the master resides and the gateway IP address of the master host:

    $ route add -host <host_ip_address> gw <gateway_ip_address> dev <device>

    For example:

    $ route add -host 10.16.41.22 gw 10.16.41.254 dev eth0

    The netstat -nr command provides the gateway IP address:

    $ netstat -nr
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.16.41.254    0.0.0.0         UG        0 0          0 eth0
  4. Add a route between the IP address of the exposed service and the IP address of the master host. Verification commands for the address and routes follow these steps:

    $ route add -net 192.168.120.0 netmask 255.255.255.0 gw 10.16.41.22 dev eth0
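
Before moving on to the node, you can confirm that the address and routes are in place. These are standard iproute2 checks; the device name follows the earlier example:

    $ ip addr show dev eth0
    $ ip route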

On the Node:

  1. Restart the network to make sure the network is up.

    $ service network restart
    Restarting network (via systemctl):  [  OK  ]

    If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.

  2. Add a route between the IP address of the host where the node is located and the gateway IP address of the node host:

    $ route add -net 10.16.40.0 netmask 255.255.248.0 gw 10.16.41.254 dev eth0

    The ifconfig command displays the host IP:

    $ ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.16.41.71  netmask 255.255.248.0  broadcast 10.16.47.255

    The netstat -nr command displays the gateway IP:

    $ netstat -nr
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    0.0.0.0         10.16.41.254    0.0.0.0         UG        0 0          0 eth0
  3. Add a route between the IP address of the exposed service and the IP address of the host system where the master resides:

    $ route add -net 192.168.120.0 netmask 255.255.255.0 gw 10.16.41.22 dev eth0
  4. Use a tool, such as cURL, to make sure you can reach the service using the public IP address:

    $ curl <public-ip>:<port>

    For example:

    $ curl 192.168.120.10:3306

    If you get a string of characters with the Got packets out of order message, your service is accessible from the node.

On the system that is not in the cluster:

  1. Restart the network to make sure the network is up.

    $ service network restart
    Restarting network (via systemctl):  [  OK  ]

    If the network is not up, you will receive error messages such as Network is unreachable when executing the following commands.

  2. Add a route between the IP address of the remote host and the gateway IP of the remote host:

    $ route add -net 10.16.64.0 netmask 255.255.248.0 gw 10.16.71.254 dev eno1
  3. Add a route between the IP address of the exposed service on master and the IP address of the master host:

    $ route add -net 192.168.120.0 netmask 255.255.255.0 gw 10.16.41.22
  4. Use a tool, such as cURL, to make sure you can reach the service using the public IP address:

    $ curl <public-ip>:<port>

    For example:

    $ curl 192.168.120.10:3306

    If you get a string of characters with the Got packets out of order message, your service is accessible outside the cluster.

Configure IP Failover

Optionally, an administrator can configure IP failover.

IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as a single node is available, the VIPs will be served. There is no way to explicitly distribute the VIPs over the nodes. As such, there may be nodes with no VIPs and other nodes with multiple VIPs. If there is only one node, all VIPs will be on it.

The VIPs must be routable from outside the cluster.

To configure IP failover:

  1. On the master, make sure the ipfailover service account has sufficient security privileges:

    $ oadm policy add-scc-to-user privileged -z ipfailover
  2. Run the following command to create the IP failover deployment:

    $ oadm ipfailover --virtual-ips=<exposed-ip-address> --watch-port=<exposed-port> --replicas=<number-of-pods> --create

    For example:

    $ oadm ipfailover --virtual-ips="172.30.233.169" --watch-port=32315 --replicas=4 --create
    --> Creating IP failover ipfailover ...
        serviceaccount "ipfailover" created
        deploymentconfig "ipfailover" created
    --> Success
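
After the command completes, you can confirm that the failover deployment and its pods are running. The resource names below match the defaults shown in the example output:

    $ oc get dc ipfailover
    $ oc get pods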