This topic provides an end-to-end example of how to use an existing Container-Native Storage, Container-Ready Storage, or standalone Red Hat Gluster Storage cluster as dynamic persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing Container-Native Storage or Container-Ready Storage, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.
All oc commands are executed on the OpenShift Container Platform master host.
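Before going further, it can help to confirm that the existing storage cluster is healthy. A minimal sanity check, assuming shell access to one of the Red Hat Gluster Storage nodes (or, for Container-Native Storage, a shell inside one of the GlusterFS pods) and that heketi-cli is installed somewhere the heketi service is reachable; the server URL below matches the resturl used later in this example:

# gluster peer status
# heketi-cli --server http://10.42.0.0:8080 cluster list

The peers and at least one heketi cluster should be reported before continuing.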
To access GlusterFS volumes, the mount.glusterfs
command must be available on
all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must
be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
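To confirm which version is actually present on a node, a quick check (assuming an RPM-based system):

# rpm -q glusterfs-fuse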
By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:
$ sudo setsebool -P virt_sandbox_use_fusefs on (1)
(1) The -P option makes the bool persistent between reboots.
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
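To verify that the boolean took effect on a node, you can query it with getsebool (an optional check):

$ getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on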
To enable dynamic provisioning, first create a StorageClass
object
definition. The definition below is based on the minimum requirements needed
for this example to work with OpenShift Container Platform. See
Dynamic
Provisioning and Creating Storage Classes for additional parameters and
specification definitions.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.42.0.0:8080" (1)
  restauthenabled: "false" (2)
(1) The heketi server URL.
(2) Since authentication is not turned on in this example, set to false.
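If your heketi server does have authentication enabled, the StorageClass must also reference the credentials. The following is only a sketch: glusterfs-secure, admin, default, and heketi-secret are placeholder names, and the secret is assumed to be of type kubernetes.io/glusterfs with the heketi admin key stored under the key field:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs-secure
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.42.0.0:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"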
From the OpenShift Container Platform master host, create the StorageClass:
# oc create -f gluster-storage-class.yaml
storageclass "glusterfs" created
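The new StorageClass should now be visible to the cluster; a quick check:

# oc get storageclass

The glusterfs class should be listed with the kubernetes.io/glusterfs provisioner.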
Create a PVC using the newly-created StorageClass. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: glusterfs
From the OpenShift Container Platform master host, create the PVC:
# oc create -f glusterfs-dyn-pvc.yaml
persistentvolumeclaim "gluster1" created
View the PVC to see that the volume was dynamically created and bound to the PVC:
# oc get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
gluster1   Bound     pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   30Gi       RWX           glusterfs      42s
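Behind the scenes, a PersistentVolume with the name shown in the VOLUME column was provisioned and bound to the claim. It can be listed alongside any other PVs if you want to inspect it:

# oc get pv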
At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now use this PVC in a pod. In this example, we use a simple NGINX pod.
Create the pod object definition:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
      readOnly: false
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 (1)
(1) The name of the PVC created in the previous step.
From the OpenShift Container Platform master host, create the pod:
# oc create -f nginx-pod.yaml
pod "nginx-pod" created
View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:
# oc get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-pod   1/1       Running   0          9m        10.38.0.0   node1
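If the pod stays in ContainerCreating or the volume fails to mount, describing the pod usually surfaces the underlying event or mount error (an optional troubleshooting step):

# oc describe pod nginx-pod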
oc exec
into the container and create an index.html file in the
mountPath
definition of the pod:
$ oc exec -ti nginx-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
Now curl
the URL of the pod:
# curl http://10.38.0.0
Hello World from GlusterFS!!!
Delete the pod, recreate it, and wait for it to come up:
# oc delete pod nginx-pod
pod "nginx-pod" deleted

# oc create -f nginx-pod.yaml
pod "nginx-pod" created

# oc get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-pod   1/1       Running   0          9m        10.37.0.0   node1
Now curl
the pod again and it should still have the same data as before.
Note that its IP address may have changed:
# curl http://10.37.0.0
Hello World from GlusterFS!!!
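Because the data lives on the GlusterFS volume rather than inside the pod, deleting and recreating the pod did not affect the claim; it is still bound to the same volume:

# oc get pvc gluster1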
Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes:
$ mount | grep heketi
/dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)

$ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
$ ls
index.html
$ cat index.html
Hello World from GlusterFS!!!
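To map the claim back to the GlusterFS volume that heketi created for it, the dynamically provisioned PV records the volume name; a quick lookup using the PV name from the earlier oc get pvc output:

# oc get pv pvc-78852230-d8e2-11e6-a3fa-0800279cf26f -o jsonpath='{.spec.glusterfs.path}'

This prints the name of the GlusterFS volume backing the claim.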