As a cluster administrator, you can set a policy to prevent application developers with certain roles from targeting specific nodes when scheduling pods.
The Pod Node Constraints admission controller ensures that pods are deployed onto only specified node hosts using labels, and prevents users without a specific role from using the nodeSelector field to schedule pods.
Use the Pod Node Constraints admission controller to ensure a pod is deployed onto only a specified node host by assigning it a label and specifying this in the nodeName setting in a pod configuration.
Ensure you have the desired labels (see Updating Labels on Nodes for details) and node selector set up in your environment.
For example, make sure that your pod configuration features the nodeName value indicating the desired label:

apiVersion: v1
kind: Pod
spec:
  nodeName: <value>
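If the target node does not yet carry the label, you can apply it and then confirm it is present. A minimal sketch using the oc client; the node name and label here are hypothetical examples:

$ oc label node node1.example.com region=us-east
$ oc get node node1.example.com --show-labels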
Modify the master configuration file (/etc/origin/master/master-config.yaml) in two places:
Add PodNodeConstraints to the admissionConfig section:
...
admissionConfig:
  pluginConfig:
    PodNodeConstraints:
      configuration:
        apiVersion: v1
        kind: PodNodeConstraintsConfig
...
Then, add the same to the kubernetesMasterConfig section:
...
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      PodNodeConstraints:
        configuration:
          apiVersion: v1
          kind: PodNodeConstraintsConfig
...
Restart OpenShift Container Platform for the changes to take effect.
# systemctl restart atomic-openshift-master
Using node selectors, you can ensure that pods are only placed onto nodes with specific labels. As a cluster administrator, you can use the Pod Node Constraints admission controller to set a policy that prevents users without the pods/binding permission from using node selectors to schedule pods.
The nodeSelectorLabelBlacklist field of a master configuration file gives you control over the labels that certain roles can specify in a pod configuration’s nodeSelector field. Users, service accounts, and groups that have the pods/binding permission can specify any node selector. Those without the pods/binding permission are prohibited from setting a nodeSelector for any label that appears in nodeSelectorLabelBlacklist.
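To see which users, groups, and service accounts already hold that permission, you can query the cluster. A minimal check, assuming your oc client supports the who-can command on the pods/binding subresource:

$ oc adm policy who-can create pods/binding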
For example, an OpenShift Container Platform cluster might consist of five data centers spread across two regions. In the U.S., "us-east", "us-central", and "us-west"; and in the Asia-Pacific region (APAC), "apac-east" and "apac-west". Each node in each geographical region is labeled accordingly, for example, region: us-east.
See Updating Labels on Nodes for details on assigning labels.
As a cluster administrator, you can create an infrastructure where application developers deploy pods only onto the nodes closest to their geographical location. You can create a node selector, grouping the U.S. data centers into superregion: us and the APAC data centers into superregion: apac.
To maintain an even loading of resources per data center, you can add the desired region to the nodeSelectorLabelBlacklist section of a master configuration, as sketched below. Then, whenever a developer located in the U.S. creates a pod, it is deployed onto a node in one of the regions with the superregion: us label.
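For this scenario, the blacklist entry would look something like the following sketch, which follows the PodNodeConstraintsConfig format shown in the steps later in this section:

...
admissionConfig:
  pluginConfig:
    PodNodeConstraints:
      configuration:
        apiVersion: v1
        kind: PodNodeConstraintsConfig
        nodeSelectorLabelBlacklist:
          - region
...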
If the developer tries to target a specific region for their pod (for example, region: us-east), they receive an error. If they try again, without the node selector on their pod, it can still be deployed onto the region they tried to target, because superregion: us is set as the project-level node selector, and nodes labeled region: us-east are also labeled superregion: us.
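As a sketch of how such a topology might be labeled (the node names here are hypothetical), each node carries both its region and its superregion:

$ oc label node node1.us-east.example.com region=us-east superregion=us
$ oc label node node1.apac-east.example.com region=apac-east superregion=apac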
Ensure you have the desired labels (see Updating Labels on Nodes for details) and node selector set up in your environment.
For example, make sure that your pod configuration features the nodeSelector value indicating the desired label:
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    <key>: <value>
...
Modify the master configuration file (/etc/origin/master/master-config.yaml) in two places:
Add nodeSelectorLabelBlacklist to the admissionConfig section, listing the labels that are assigned to the node hosts to which you want to deny pod placement:
...
admissionConfig:
  pluginConfig:
    PodNodeConstraints:
      configuration:
        apiVersion: v1
        kind: PodNodeConstraintsConfig
        nodeSelectorLabelBlacklist:
          - kubernetes.io/hostname
          - <label>
...
Then, add the same to the kubernetesMasterConfig section to restrict direct pod creation:
...
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      PodNodeConstraints:
        configuration:
          apiVersion: v1
          kind: PodNodeConstraintsConfig
          nodeSelectorLabelBlacklist:
            - kubernetes.io/hostname
            - <label_1>
...
Restart OpenShift Container Platform for the changes to take effect.
# systemctl restart atomic-openshift-master
The Pod Node Selector admission controller allows you to force pods onto nodes associated with a specific project and prevent pods that do not match the project's node selector from being scheduled onto those nodes.
The Pod Node Selector admission controller determines where a pod can be placed using labels on projects and node selectors specified in pods. A new pod will be placed on a node associated with a project only if the node selectors in the pod match the labels in the project.
After the pod is created, the node selectors are merged into the pod so that the pod specification includes the labels originally included in the specification and any new labels from the node selectors. The example below illustrates the merging effect.
The Pod Node Selector admission controller also allows you to create a list of labels that are permitted in a specific project. This list acts as a whitelist that lets developers know what labels are acceptable to use in a project and gives administrators greater control over labeling in a cluster.
To activate the Pod Node Selector admission controller:
Configure the Pod Node Selector admission controller and whitelist, using one of the following methods:
Add the following to the master configuration file (/etc/origin/master/master-config.yaml):
admissionConfig:
  pluginConfig:
    PodNodeSelector:
      configuration:
        podNodeSelectorPluginConfig: (1)
          clusterDefaultNodeSelector: "k3=v3" (2)
          ns1: region=west,env=test,infra=fedora,os=fedora (3)
1. Adds the Pod Node Selector admission controller plug-in.
2. Specifies a cluster-wide default node selector, applied to pods in projects that do not set their own node selector annotation.
3. Creates a whitelist of permitted labels in the specified project. Here, the project is ns1 and the labels are the key=value pairs that follow.
Restart OpenShift Container Platform for the changes to take effect.
# systemctl restart atomic-openshift-master
Create a project object that includes the scheduler.alpha.kubernetes.io/node-selector annotation and labels.
{ "kind": "Namespace", "apiVersion": "v1", "metadata": { "name": "ns1", "annotations": { "scheduler.alpha.kubernetes.io/node-selector": "env=test,infra=fedora" (1) } }, "spec": {}, "status": {} }
1. Annotation to create the labels to match the project label selector. Here, the key/value labels are env=test and infra=fedora.
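Then create the project object from the file (the file name here is a hypothetical example):

$ oc create -f ns1.json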
Create a pod specification that includes the labels in the node selector, for example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: hello-pod
  name: hello-pod
spec:
  containers:
    - image: "docker.io/ocpqe/hello-pod:latest"
      imagePullPolicy: IfNotPresent
      name: hello-pod
      ports:
        - containerPort: 8080
          protocol: TCP
      resources: {}
      securityContext:
        capabilities: {}
        privileged: false
      terminationMessagePath: /dev/termination-log
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeSelector: (1)
    env: test
    os: fedora
  serviceAccount: ""
status: {}
1. Node selectors to match project labels.
Create the pod in the project:
$ oc create -f pod.yaml --namespace=ns1
Check that the node selector labels were added to the pod configuration:
$ oc get pod hello-pod --namespace=ns1 -o json
...
"nodeSelector": {
  "env": "test",
  "infra": "fedora",
  "os": "fedora"
}
...
The node selectors are merged into the pod, and the pod should be scheduled onto a node associated with the appropriate project.
If you create a pod with a label that is not specified in the project specification, the pod is not scheduled on a node. For example, here the label env: production is not in any project specification:

nodeSelector:
  env: production
  infra: fedora
  os: fedora
If you create the pod in a project that does not have a node selector annotation, the pod is scheduled using the clusterDefaultNodeSelector value, if one is set.
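For illustration, a sketch assuming the clusterDefaultNodeSelector: "k3=v3" setting from the configuration above: a pod created in such a project has the cluster default merged into its specification instead:

"nodeSelector": {
  "k3": "v3"
}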