# Complete Example Using GlusterFS
This topic provides an end-to-end example of how to use an existing Container-Native Storage, Container-Ready Storage, or standalone Red Hat Gluster Storage cluster as persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing Container-Native Storage or Container-Ready Storage, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.
For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example Using GlusterFS for Dynamic Provisioning.
All oc commands are executed on the OpenShift Container Platform master host.
To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
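To confirm that the client is in place on a node, you can check for the package and its mount helper (an optional sanity check, not part of the original procedure):

$ rpm -q glusterfs-fuse
$ which mount.glusterfs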
By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:
$ sudo setsebool -P virt_sandbox_use_fusefs on (1)
1 | The -P option makes the boolean persistent between reboots. |
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed. |
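You can verify the current value of the boolean with getsebool; if the setsebool command above succeeded, it reports on:

$ getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on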
To enable static provisioning, first create a GlusterFS volume. See the Red Hat Gluster Storage Administration Guide for information on how to do this using the gluster command-line interface, or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
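For illustration only, a replica-3 volume named myVol1 could be created with the gluster CLI as follows; the server hostnames and brick paths are placeholders for your own topology:

# gluster volume create myVol1 replica 3 server1:/rhgs/brick1/myVol1 server2:/rhgs/brick1/myVol1 server3:/rhgs/brick1/myVol1
# gluster volume start myVol1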
Define the following Service and Endpoints in gluster-endpoints.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster (1)
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster (1)
subsets:
- addresses:
  - ip: 192.168.122.221 (2)
  ports:
  - port: 1 (3)
- addresses:
  - ip: 192.168.122.222 (2)
  ports:
  - port: 1 (3)
- addresses:
  - ip: 192.168.122.223 (2)
  ports:
  - port: 1 (3)
1 | These names must match. |
2 | The ip values must be the actual IP addresses of a Red Hat Gluster Storage server, not hostnames. |
3 | The port number is ignored. |
From the OpenShift Container Platform master host, create the Service and Endpoints:
$ oc create -f gluster-endpoints.yaml
service "glusterfs-cluster" created
endpoints "glusterfs-cluster" created
Verify that the Service and Endpoints were created:
$ oc get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
glusterfs-cluster 172.30.205.34 <none> 1/TCP <none> 44s
$ oc get endpoints
NAME ENDPOINTS AGE
docker-registry 10.1.0.3:5000 4h
glusterfs-cluster 192.168.122.221:1,192.168.122.222:1,192.168.122.223:1 11s
kubernetes 172.16.35.3:8443 4d
Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.
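For example, the same definition file can be reused in a second project by passing a different namespace to oc create (the project name here is illustrative):

$ oc create -f gluster-endpoints.yaml -n another-project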
In order to access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:
$ mkdir -p /mnt/glusterfs/myVol1
$ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
$ ls -lnZ /mnt/glusterfs/
drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0 myVol1 (1) (2)
1 | The UID is 592. |
2 | The GID is 590. |
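If a container image does not run with a matching UID, one way to grant write access is to run the pod with the volume's GID as a supplemental group. The fragment below is a sketch of the relevant part of a pod specification, using the GID discovered above; the PV annotation described in the next step achieves the same effect automatically:

spec:
  securityContext:
    supplementalGroups: [590]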
Define the following PersistentVolume (PV) in gluster-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume (1)
  annotations:
    pv.beta.kubernetes.io/gid: "590" (2)
spec:
  capacity:
    storage: 2Gi (3)
  accessModes: (4)
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster (5)
    path: myVol1 (6)
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
1 | The name of the volume. |
2 | The GID on the root of the GlusterFS volume. |
3 | The amount of storage allocated to this volume. |
4 | accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. |
5 | The Endpoints resource previously created. |
6 | The GlusterFS volume that will be accessed. |
From the OpenShift Container Platform master host, create the PV:
$ oc create -f gluster-pv.yaml
Verify that the PV was created:
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume <none> 2147483648 RWX Available 2s
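For more detail, including the endpoints and volume path backing the PV, oc describe can be used (optional):

$ oc describe pv gluster-default-volume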
Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim (1)
spec:
  accessModes:
    - ReadWriteMany (2)
  resources:
    requests:
      storage: 1Gi (3)
1 | The claim name is referenced by the pod under its volumes section. |
2 | Must match the accessModes of the PV. |
3 | This claim will look for PVs offering 1Gi or greater capacity. |
From the OpenShift Container Platform master host, create the PVC:
$ oc create -f gluster-claim.yaml
Verify that the PV and PVC are bound:
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume <none> 2Gi RWX Bound gluster-claim 37s
$ oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-default-volume 2Gi RWX 24s
PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.
At this point, you have a statically provisioned GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod. In this example, we use a simple NGINX pod.
Create the pod object definition:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
      readOnly: false
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster-claim (1)
1 | The name of the PVC created in the previous step. |
From the OpenShift Container Platform master host, create the pod:
# oc create -f nginx-pod.yaml
pod "nginx-pod" created
View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:
# oc get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-pod   1/1       Running   0          9m        10.38.0.0   node1
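If the pod remains in ContainerCreating, its events usually show any mount errors; oc describe is a useful first check (a generic troubleshooting step, not specific to this example):

# oc describe pod nginx-pod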
oc exec into the container and create an index.html file in the mountPath definition of the pod:
$ oc exec -ti nginx-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
Now curl the URL of the pod:
# curl http://10.38.0.0
Hello World from GlusterFS!!!
Delete the pod, recreate it, and wait for it to come up:
# oc delete pod nginx-pod
pod "nginx-pod" deleted
# oc create -f nginx-pod.yaml
pod "nginx-pod" created
# oc get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-pod   1/1       Running   0          9m        10.37.0.0   node1
Now curl the pod again and it should still have the same data as before. Note that its IP address may have changed:
# curl http://10.37.0.0
Hello World from GlusterFS!!!
Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes:
$ mount | grep heketi
/dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
$ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
$ ls
index.html
$ cat index.html
Hello World from GlusterFS!!!
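To relate these brick paths back to the volume itself, the gluster CLI on a Red Hat Gluster Storage node can list the bricks backing myVol1 (optional):

# gluster volume info myVol1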