OKD
OKD (formerly OpenShift Origin) is the open source, foundational project of OpenShift; the name is a pseudo-acronym for the Origin Community Distribution of Kubernetes.
CodeReady Containers
CodeReady Containers (crc) is a simple, local, single-node OpenShift cluster that supports OKD. Installation requires a free Red Hat account.
minishift
minishift is a simple, local, single-node OpenShift cluster:
- Linux:
wget https://github.com/minishift/minishift/releases/latest/download/minishift-$(curl -s -L -H "Accept: application/json" https://github.com/minishift/minishift/releases/latest | sed 's/.*tag_name":"v//g' | sed 's/".*//g')-linux-amd64.tgz
- macOS:
brew install --cask minishift
YAML Manifest
A YAML manifest is a YAML file that describes objects to Kubernetes; every manifest has four required top-level fields:
- apiVersion
- kind
- metadata
- spec
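For example, a minimal sketch of a Pod manifest showing the four fields (the name example-pod is illustrative; the image is the internal httpd image used later in this document):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: httpd
    image: image-registry.openshift-image-registry.svc:5000/openshift/httpd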
oc
The OpenShift client (oc) is the OpenShift command line interface, available for download publicly or from a cluster's console downloads route, which lists the most common client downloads:
- https://downloads-openshift-console.${CLUSTER_DOMAIN_NAME}/
Version
- Print oc version:
oc version
Client Version: openshift-clients-4.4.0-202006211643.p0-9-g9cd748327
Kubernetes Version: v1.17.1+45f8ddb
Tips
- Source tab-completion scripts with each newly installed version of oc:
- bash: https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/configuring-cli.html
- zsh: add source <(oc completion zsh) to ~/.zshrc
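For bash, a minimal sketch of persisting completion across sessions (the file name ~/.oc_completion.bash is an arbitrary choice):
oc completion bash > ~/.oc_completion.bash
echo 'source ~/.oc_completion.bash' >> ~/.bashrc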
Login
- To get the login token, go to the OpenShift console, click on your name in the top right and then Copy Login Command.
- Login:
oc login -u $USER https://api.${CLUSTER_DOMAIN_NAME}:6443
- The file pointed to by the KUBECONFIG envar or specified with oc --config= (normally ~/.kube/config) contains session authentication information.
- You may log in to multiple clusters concurrently by specifying different KUBECONFIG envars or --config= files and switching between them (see the example after this list).
- Show logged-in server version:
oc version
and search for "Server Version"
- Show logged-in name:
oc whoami
- Show API address:
oc whoami --show-server
- Show web console address:
oc whoami --show-console
- Show login token:
oc whoami --show-token
- Show current project, cluster, and user:
oc whoami --show-context
- Show general status:
oc status --suggest
- Logout:
oc logout
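A sketch of two concurrent sessions kept in separate KUBECONFIG files (the file names and the CLUSTER1/CLUSTER2 domain variables are illustrative):
KUBECONFIG=~/.kube/config-cluster1 oc login -u $USER https://api.${CLUSTER1_DOMAIN_NAME}:6443
KUBECONFIG=~/.kube/config-cluster2 oc login -u $USER https://api.${CLUSTER2_DOMAIN_NAME}:6443
KUBECONFIG=~/.kube/config-cluster1 oc whoami --show-context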
APIs
- List available APIs and versions:
oc api-versions
- List available resource kinds:
oc api-resources
NAME                SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                                    true         Binding
componentstatuses   cs                      false        ComponentStatus
configmaps          cm                      true         ConfigMap
[...]
- Describe a resource kind:
oc explain $KIND
$ oc explain pod
KIND:     Pod
VERSION:  v1

DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.
[...]
- Add --api-version= for a specific API version.
- Describe sub-components of a resource kind:
oc explain $NAME.$COMPONENT
$ oc explain pod.spec
KIND:     Pod
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the pod.
[...]
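For example, oc explain can drill into nested fields and target a specific API group/version (the field path shown is just one possibility):
oc explain deployment.spec.template.spec.containers --api-version=apps/v1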
Projects
- New Project (and set to the current project):
oc new-project $NAME
- For details, add
--display-name="$DISPLAY" --description="$DESCRIPTION"
- Show current project:
oc project
- List projects:
oc get projects
NAME                  DISPLAY NAME   STATUS
default                              Active
kube-node-lease                      Active
kube-public                          Active
kube-system                          Active
openshift                            Active
openshift-apiserver                  Active
[...]
- Change current project:
oc project $PROJECT
- Delete a project and all its resources:
oc delete project $NAME
- An explicit project may be specified with
-n $PROJECT
- Show deployed applications for a project:
oc status
$ oc status
In project OpenShift Tools (af1f-openshift-tools) on server https://api.shared-na4.na4.openshift.opentlc.com:6443

http://cakephp-mysql-example-af1f-openshift-tools.apps.shared-na4.na4.openshift.opentlc.com (svc/cakephp-mysql-example)
  dc/cakephp-mysql-example deploys istag/cakephp-mysql-example:latest <-
    bc/cakephp-mysql-example source builds https://github.com/sclorg/cakephp-ex.git on openshift/php:7.3
      build #1 running for 51 seconds - 377fe8f: Merge pull request #117 from multi-arch/master (Honza Horak <hhorak@redhat.com>)
  deployment #1 waiting on image or update

svc/mysql - 172.30.12.249:3306
  dc/mysql deploys openshift/mysql:5.7
    deployment #1 deployed 50 seconds ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
- Create an application from YAML:
- Create the YAML. For example, the following requires 2 replicas of httpd:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 2
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: image-registry.openshift-image-registry.svc:5000/openshift/httpd
        ports:
        - containerPort: 8080
- Execute the YAML:
oc create -f httpd.yaml
- Monitor app deployment:
oc get all
- Create service YAML. For example, using the app=httpd selector:
apiVersion: v1
kind: Service
metadata:
  name: httpd-deployment
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: httpd
  type: ClusterIP
- Execute the YAML:
oc create -f httpd-service.yaml
- Monitor app deployment:
oc get all
- Create a route YAML. For example, pointing to the service:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: httpd-deployment
spec:
  port:
    targetPort: 8080
  to:
    kind: Service
    name: httpd-deployment
- Execute the YAML:
oc create -f httpd-route.yaml
- Monitor app deployment:
oc get all
- Test the route using the public host/port shown in oc get all (see the combined example after these steps).
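Putting the steps together, a sketch that creates all three objects and tests the route (assumes the file and object names used above):
oc create -f httpd.yaml -f httpd-service.yaml -f httpd-route.yaml
oc get all
curl http://$(oc get route httpd-deployment -o jsonpath='{.spec.host}')/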
Applications
- Deploy application to a project:
oc new-app $LANGUAGE~$URL -n $PROJECT
- This creates a DeploymentConfig and ReplicationController
- Deploy application from a particular image:
oc new-app --docker-image=$REGISTRY/$REPO/$IMAGE:$TAG --name=$NAME
- Describe an application's DeploymentConfig:
oc describe dc $APP
- Update an application's DeploymentConfig:
oc edit dc $APP
- Search for application images:
oc new-app --search ${IMAGE}
- Tail build logs:
oc logs -f build/$APP-1 -n $PROJECT
- Tag a built image:
oc tag $APP:latest $APP:$TAG -n $PROJECT
- List app tags:
oc describe imagestream $APP -n $PROJECT
- Deploy built app from one project to another:
oc new-app $PROJECT1/$APP:$TAG --name=$APP -n $PROJECT2
- Create a route:
oc expose service $APP -n $PROJECT
- Disable automatic deployment:
oc get dc $APP -o yaml -n $PROJECT | sed 's/automatic: true/automatic: false/g' | oc replace -f -
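For example, a sketch of the full flow for the cakephp sample repository mentioned earlier (the project and application names are illustrative):
oc new-project demo-apps
oc new-app php~https://github.com/sclorg/cakephp-ex.git --name=cakephp-example
oc logs -f bc/cakephp-example
oc expose service cakephp-example
oc get route cakephp-example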
DeploymentConfigs
- Show DeploymentConfigs' statuses:
oc get dc
Deployments
- Show deployment including events/actions that it took such as rollout:
oc describe deployment $NAME -n $PROJECT
- Show deployment logs:
oc logs -f deployment/$NAME -n $PROJECT
- Show all deployments:
oc status
or oc get all
- Scale the pods of a deployment:
oc scale deployment $NAME --replicas=N
- Show replica rollout status:
oc rollout status deployment $NAME
- After an image is updated, edit the deployment's container image tag:
oc set image deployment $NAME $CONTAINER=$NEW_IMAGE
or oc edit deployment $NAME
- Deployment strategies:
- Rolling (default): This creates a new ReplicaSet, scales it up with the new image, and scales down the ReplicaSet with the old image. If the rollout doesn't succeed within 10 minutes, it's rolled back. This is also called a Canary approach.
- Recreate: Terminate existing pods, perform actions like DB migrations, start new pods.
- Lifecycle hooks may be done before (pre) stopping the old pods, after stopping old pods but before starting new pods (mid), or after starting the new pods (post).
- Blue/Green: Create a separate deployment/service, change the route to point to the new service when ready, and delete the old deployment/service.
- A/B: Split traffic (e.g. 50/50).
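A sketch of a rolling image update followed by a rollback (the container name and image tag are placeholders):
oc set image deployment $NAME httpd=image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
oc rollout status deployment $NAME
oc rollout undo deployment $NAME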
Resources
- List all resources in a project:
oc get all
- Describe a resource:
oc describe ${RESOURCE}
Nodes
- List nodes:
oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-129-77.us-west-1.compute.internal    Ready    master   52m   v1.17.1
ip-10-0-130-47.us-west-1.compute.internal    Ready    worker   43m   v1.17.1
ip-10-0-135-81.us-west-1.compute.internal    Ready    worker   43m   v1.17.1
ip-10-0-143-178.us-west-1.compute.internal   Ready    master   52m   v1.17.1
ip-10-0-147-232.us-west-1.compute.internal   Ready    worker   43m   v1.17.1
ip-10-0-153-32.us-west-1.compute.internal    Ready    master   52m   v1.17.1
- List more details of nodes:
oc get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE    KERNEL-VERSION    CONTAINER-RUNTIME
ip-10-0-   Ready    master   73m   v1.17.1   10.0.129.77    <none>        Red Hat E   4.18.0-147.8.1.   cri-o://1.17.4-8.d
ip-10-0-   Ready    worker   64m   v1.17.1   10.0.130.47    <none>        Red Hat E   4.18.0-147.8.1.   cri-o://1.17.4-8.d
ip-10-0-   Ready    worker   64m   v1.17.1   10.0.135.81    <none>        Red Hat E   4.18.0-147.8.1.   cri-o://1.17.4-8.d
ip-10-0-   Ready    master   73m   v1.17.1   10.0.143.178   <none>        Red Hat E   4.18.0-147.8.1.   cri-o://1.17.4-8.d
ip-10-0-   Ready    worker   64m   v1.17.1   10.0.147.232   <none>        Red Hat E   4.18.0-147.8.1.   cri-o://1.17.4-8.d
ip-10-0-   Ready    master   73m   v1.17.1   10.0.153.32    <none>        Red Hat E   4.18.0-147.8.1.   cri-o://1.17.4-8.d
Machine Sets
- List machine sets:
oc get machinesets -n openshift-machine-api
NAME                                   DESIRED   CURRENT   READY   AVAILABLE   AGE
cluster-f1bd-4fp56-worker-us-west-1a   2         2         2       2           58m
cluster-f1bd-4fp56-worker-us-west-1c   1         1         1       1           58m
- Describe a machineset:
oc describe machineset $NAME -n openshift-machine-api
- Change the type of instances of a machine set:
- Scale down a machine set to 0:
oc scale machineset $NAME --replicas=0 -n openshift-machine-api
- Wait until replicas is 0:
oc get machinesets -n openshift-machine-api
- Wait until the node disappears:
oc get nodes
- Change the instance type:
oc patch machineset $NAME --type='merge' --patch='{"spec": { "template": { "spec": { "providerSpec": { "value": { "instanceType": "m5.2xlarge"}}}}}}' -n openshift-machine-api
- Scale back up:
oc scale machineset $NAME --replicas=1 -n openshift-machine-api
Machines
- List machines:
oc get machines -n openshift-machine-api
NAME                                         PHASE     TYPE        REGION      ZONE         AGE
cluster-f1bd-4fp56-master-0                  Running   m4.xlarge   us-west-1   us-west-1a   52m
cluster-f1bd-4fp56-master-1                  Running   m4.xlarge   us-west-1   us-west-1c   52m
cluster-f1bd-4fp56-master-2                  Running   m4.xlarge   us-west-1   us-west-1a   52m
cluster-f1bd-4fp56-worker-us-west-1a-smf2q   Running   m4.large    us-west-1   us-west-1a   46m
cluster-f1bd-4fp56-worker-us-west-1a-x2qqz   Running   m4.large    us-west-1   us-west-1a   46m
cluster-f1bd-4fp56-worker-us-west-1c-gz2lc   Running   m4.large    us-west-1   us-west-1c   46m
- Describe a machine:
oc describe machine $NAME -n openshift-machine-api
Name:         cluster-f1bd-4fp56-master-0
Namespace:    openshift-machine-api
Labels:       machine.openshift.io/cluster-api-cluster=cluster-f1bd-4fp56
              machine.openshift.io/cluster-api-machine-role=master
              machine.openshift.io/cluster-api-machine-type=master
              machine.openshift.io/instance-type=m4.xlarge
              machine.openshift.io/region=us-west-1
              machine.openshift.io/zone=us-west-1a
Annotations:  machine.openshift.io/instance-state: running
[...]
    Ami:
      Id:          ami-02b6556210798d665
    API Version:   awsproviderconfig.openshift.io/v1beta1
    Block Devices:
[...]
  Addresses:
    Address:  10.0.129.77
    Type:     InternalIP
    Address:  ip-10-0-129-77.us-west-1.compute.internal
    Type:     InternalDNS
    Address:  ip-10-0-129-77.us-west-1.compute.internal
    Type:     Hostname
[...]
Events:
  Type    Reason   Age                   From            Message
  ----    ------   ----                  ----            -------
  Normal  Updated  2m35s (x12 over 55m)  aws-controller  Updated machine cluster-f1bd-4fp56-master-0
- List control plane machines:
oc get machines -l machine.openshift.io/cluster-api-machine-type=master -n openshift-machine-api
- List worker machines:
oc get machines -l machine.openshift.io/cluster-api-machine-type=worker -n openshift-machine-api
- List machines with a custom query such as instance type:
oc get machines -n openshift-machine-api -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{.spec.providerSpec.value.instanceType}{end}{"\n"}'
or region:
oc get machines -n openshift-machine-api -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{.spec.providerSpec.value.placement.region}{end}{"\n"}'
Pods
- List pods for the current project:
oc get pods
- List all pods:
oc get pods --all-namespaces
- List pods for a particular project:
$ oc get pods -n openshift-apiserver
NAME                         READY   STATUS    RESTARTS   AGE
apiserver-77c9656b9f-c5mlr   1/1     Running   0          45m
apiserver-77c9656b9f-ddll4   1/1     Running   0          46m
apiserver-77c9656b9f-lfxvb   1/1     Running   0          45m
- List running pods:
oc get pods --field-selector status.phase=Running
- Open a shell on a pod:
oc rsh $POD
- Execute a remote command on a pod:
oc exec $POD -- uptime
- Copy a file or directory from a pod:
oc cp $POD:/etc/hostname hostname
- List all pods:
oc get pods -A
NAMESPACE                      NAME                                            READY   STATUS    RESTARTS   AGE
openshift-apiserver-operator   openshift-apiserver-operator-7f87667d89-rd9tz   1/1     Running   1          56m
openshift-apiserver            apiserver-77c9656b9f-c5mlr                      1/1     Running   0          45m
[...]
- Tail pod updates until you hit Ctrl+C:
oc get pods -w
- Show details of a pod:
oc get pods $POD -o yaml
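A sketch combining selectors with exec and cp, using the app=httpd label from the Deployment created earlier:
POD=$(oc get pods -l app=httpd --field-selector status.phase=Running -o jsonpath='{.items[0].metadata.name}')
oc exec $POD -- uptime
oc cp $POD:/etc/hostname ./hostname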
Scaling
- Set the number of pods for a resource:
oc scale $KIND/$NAME --replicas=$N
- Configure auto-scaling:
oc autoscale $KIND/$NAME --min=$MIN --max=$MAX --cpu-percent=$PERCENT
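For example, a sketch of auto-scaling the httpd Deployment from earlier (the thresholds are arbitrary):
oc autoscale deployment/httpd-deployment --min=2 --max=5 --cpu-percent=75
oc get hpa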
Services
- List services:
oc get svc
- Default service name:
$SERVICE.$PROJECT.svc.cluster.local
- Show details of a service:
oc get svc $SERVICE -o yaml
$ oc get svc mysql -o yaml
[...]
spec:
  clusterIP: 172.30.12.249
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    name: mysql
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
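A sketch of reaching a service through its cluster DNS name from inside another pod (assumes the pod image provides curl, and uses the httpd service and port from earlier):
oc rsh $POD curl -s http://httpd-deployment.$PROJECT.svc.cluster.local:8080/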
Routes
- Expose an application publicly with a route:
oc expose service/${APP}
- Add --name=$NAME to specify the name
- Create a secured route (see the example after this list):
oc create route edge $NAME --service=$SERVICE
- Display all application routes:
oc get route
- Display routes for an application:
oc get route/${APP}
- Display route details:
oc get route $ROUTE -o yaml
- Edit a route to change its service:
oc patch route/$ROUTE -p "{\"spec\":{\"to\":{\"name\":\"$NEWSERVICE\"}}}"
- Display route URL:
oc get route $ROUTE --template='{{ .spec.host }}'
- This can be combined with curl:
curl http://$(oc get route $ROUTE --template='{{ .spec.host }}')/
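A sketch of a secured, edge-terminated route for the httpd service from earlier, redirecting plain HTTP to HTTPS:
oc create route edge httpd-secure --service=httpd-deployment --insecure-policy=Redirect
curl -k https://$(oc get route httpd-secure --template='{{ .spec.host }}')/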
Ports
- Expose a port for an existing pod:
oc port-forward $POD $PORT:$PORT
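For example, a sketch that forwards local port 8080 to the pod in the background and tests it:
oc port-forward $POD 8080:8080 &
curl http://localhost:8080/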
Jenkins
- Create Jenkins pipeline:
oc new-app jenkins-persistent --param MEMORY_LIMIT=2Gi --param ENABLE_OAUTH=true -e JENKINS_PASSWORD=$JENKINSPASSWORD -n $PROJECT
- Set pipeline maximum resource usage if needed:
oc set resources dc jenkins --limits=cpu=2 --requests=cpu=1,memory=2Gi
- Find the route for Jenkins:
oc get route jenkins -n $PROJECT
- Allow a Jenkins app to access other projects:
oc policy add-role-to-user edit system:serviceaccount:$JENKINSPROJECT:jenkins -n $PROJECT
- Allow another project to pull from a pipeline:
oc policy add-role-to-group system:image-puller system:serviceaccounts:$JENKINSPROJECT -n $PROJECT
- If you want to disable automatic deployment of newly built application images:
oc set triggers dc openshift-tasks --manual
Jenkins Web Console
Retrieve the route from above and log in with OpenShift OAuth credentials. Common tasks:
- New pipeline: New Item → Tasks → Pipeline → OK → Add pipeline code → Save → Build Now → Open Blue Ocean
- Example pipeline:
node {
  stage('Build Tasks') {
    openshift.withCluster() {
      openshift.withProject("$PROJECT") {
        openshift.selector("bc", "openshift-tasks").startBuild("--wait=true")
      }
    }
  }
  stage('Tag Image') {
    openshift.withCluster() {
      openshift.withProject("$PROJECT") {
        openshift.tag("openshift-tasks:latest", "openshift-tasks:${BUILD_NUMBER}")
      }
    }
  }
  stage('Deploy new image') {
    openshift.withCluster() {
      openshift.withProject("$PROJECT") {
        openshift.selector("dc", "openshift-tasks").rollout().latest();
      }
    }
  }
}
Quotas
- List quotas for a project:
oc get quota -n $PROJECT
- Get details of a quota:
oc describe quota $QUOTA -n $PROJECT
- List cluster resource quotas:
oc get appliedclusterresourcequotas
NAME                       LABELS SELECTOR   ANNOTATIONS SELECTOR
clusterquota-shared-10c1   <none>            map[openshift.io/requester:...]
- Show details of a cluster resource quota:
oc describe appliedclusterresourcequota $NAME
Name:   clusterquota-shared-10c1
[...]
Resource                Used    Hard
--------                ----    ----
configmaps              3       100
limits.cpu              500m    16
limits.memory           512Mi   45Gi
persistentvolumeclaims  0       15
pods                    1       30
requests.cpu            50m     16
requests.memory         512Mi   20Gi
requests.storage        0       50Gi
secrets                 13      150
services                2       150
- Query drivers of the resource usage:
oc get pods --field-selector=status.phase=Running -o json | jq '.items[] | {name: .metadata.name, res: .spec.containers[].resources}'
- Change container memory limits:
oc set resources dc $NAME --limits=memory=1Gi
Environment Variables
- List envars for a resource:
oc set env $KIND/$RESOURCE --list
- Set envar on a resource:
oc set env $KIND/$RESOURCE NAME=value
- Set envar on all resources of a kind:
oc set env $KIND --all NAME=value
- Set envar on a resource from a secret:
oc set env $KIND/$RESOURCE --from=secret/$NAME
- Delete an envar with a dash (-):
oc set env $KIND/$RESOURCE NAME-
- Automatically created envars:
${SERVICE}_SERVICE_HOST
${SERVICE}_SERVICE_PORT
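A sketch of checking the automatically created service envars from inside a pod:
oc rsh $POD env | grep _SERVICE_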
Secrets
- Create secret:
oc create secret generic $NAME
- Describe secret:
oc describe secrets $NAME
or oc get secret $NAME -o yaml
- Mount a secret as a filesystem:
oc set volume $KIND/$RESOURCE --add --overwrite --name=$VOLUME --mount-path $PATH --secret-name=$SECRET
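A sketch of creating a secret from literal values and injecting it into a deployment as envars (the secret name and keys are illustrative):
oc create secret generic mysql-credentials --from-literal=MYSQL_USER=app --from-literal=MYSQL_PASSWORD=changeme
oc set env deployment/httpd-deployment --from=secret/mysql-credentials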
ConfigMaps
- Create a ConfigMap:
oc create configmap $NAME
- Create a ConfigMap from a literal value:
oc create configmap $NAME --from-literal=KEY=value
- Create a ConfigMap from a file of key/value pairs on each line:
oc create configmap $NAME --from-file=$FILE.properties
- Create a ConfigMap from a directory of files with key=filename and value=file contents:
oc create configmap $NAME --from-file=$DIRECTORY
- Describe a ConfigMap:
oc describe configmaps $NAME
or oc get configmaps $NAME -o yaml
- Consume ConfigMap as an envar in a pod:
spec:
  containers:
  - name: $PODNAME
    image: $IMAGE
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: $LOCALKEY
      valueFrom:
        configMapKeyRef:
          name: $CONFIGMAP
          key: $KEY
- Consume ConfigMap as a file in a pod:
spec:
  containers:
  - name: $PODNAME
    image: $IMAGE
    command: [ "/bin/sh", "-c", "cat /etc/config/$KEYFILE" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: $CONFIGMAP
- Consume ConfigMap as a file in a pod at a specific sub-directory:
spec:
  containers:
  - name: $PODNAME
    image: $IMAGE
    command: [ "/bin/sh", "-c", "cat /etc/config/subdir/$KEYFILE" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: $CONFIGMAP
      items:
      - key: $KEY
        path: subdir/special-key
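A sketch of creating a ConfigMap from a literal and wiring it into a deployment's environment (the names are illustrative):
oc create configmap app-config --from-literal=LOG_LEVEL=debug
oc set env deployment/httpd-deployment --from=configmap/app-config
oc set env deployment/httpd-deployment --list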
Downward API
- Mount Downward API data in a pod as envars:
spec:
  containers:
  - name: $PODNAME
    image: $IMAGE
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MY_MEM_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MY_MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
- Mount Downward API data in a pod as volumes:
spec:
  containers:
  - name: $PODNAME
    image: $IMAGE
    command: [ "/bin/sh", "-c", "cat /downward/pod_labels" ]
    volumeMounts:
    - name: podinfo
      mountPath: /downward
      readOnly: false
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "pod_labels"
        fieldRef:
          fieldPath: metadata.labels
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: $PODNAME
          resource: limits.cpu
      - path: "cpu_request"
        resourceFieldRef:
          containerName: $PODNAME
          resource: requests.cpu
      - path: "mem_limit"
        resourceFieldRef:
          containerName: $PODNAME
          resource: limits.memory
      - path: "mem_request"
        resourceFieldRef:
          containerName: $PODNAME
          resource: requests.memory
BuildConfig
- Show the build process:
oc logs -f bc/$APP
Common Troubleshooting
- Get status:
oc status
and oc get all
- List pods:
oc get pods
- SSH into a pod:
oc rsh $POD
- Execute a remote command on a pod:
oc exec $POD -- uptime
- Copy a file or directory from a pod:
oc cp $POD:/etc/hostname hostname
- List services:
oc get svc
- List all events sorted by timestamp:
oc get events --sort-by='.lastTimestamp'
- Use --loglevel=10 on any command for detailed tracing.
- Access a particular resource:
oc debug $NAME
such as dc/$DC or node/$NODE
- Show output in different formats with -o wide, -o yaml, -o json, -o jsonpath='{.spec.host}{"\n"}' (for XPath-like selection), and more.
Create Template
- Export a template for existing objects (note: oc export has been removed from newer oc clients; oc get -o yaml can be used instead):
oc export all --as-template=$NEWTEMPLATE
- Create a new configuration from a template:
oc process -f template.json -p KEY1=value1 -p KEY2=value2 | oc create -f -
- Create a stored template:
oc create -f template.json
- Describe a stored template:
oc describe template $NAME
(many are stored in the openshift project; add -n openshift)
- Create an application from a stored template:
oc new-app --template=$TEMPLATE --param=KEY1=value1
- A project/ prefix is not needed for the template if it's in the openshift project
- Create an application from a template file:
oc new-app --file=template.json --param=KEY1=value1
- Generating random passwords:
parameters:
- name: PASSWORD
  description: "The random user password"
  generate: expression
  from: "[a-zA-Z0-9]{12}"
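A sketch of inspecting a template's parameters before instantiating it, either from a file or from the shared openshift project:
oc process --parameters -f template.json
oc process --parameters -n openshift $TEMPLATE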
Users
- List users:
oc get users
- List roles and users for a project:
oc get rolebindings
- List roles and users for a cluster:
oc get clusterrolebindings
- Add project role to a user:
oc adm policy add-role-to-user $ROLE $USER
- Add cluster role to a user:
oc adm policy add-cluster-role-to-user $ROLE $USER
Registry
- Show registry pods:
oc get pods -n openshift-image-registry
- Show registry services:
oc get services -n openshift-image-registry
- Show registry deployments:
oc get deployments -n openshift-image-registry
- Show registry logs:
oc logs deployments/image-registry -n openshift-image-registry
- By default, the registry does not have a public route. If one is required, the registry must be exposed:
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
- Show registry routes:
oc get routes -n openshift-image-registry
- Allow a user to push to the registry:
oc policy add-role-to-user registry-editor $USER
- Allow a user to pull from the registry:
oc policy add-role-to-user registry-viewer $USER
- docker login:
echo $(oc whoami --show-token) | docker login -u $(oc whoami) --password-stdin default-route-openshift-image-registry.${CLUSTER_DOMAIN_NAME}
- Build image for the registry (or tag an image):
docker build -t $ROUTE/$PROJECT/$IMAGE .
(where $ROUTE comes from the HOST/PORT of oc get routes -n openshift-image-registry and $PROJECT comes from oc project)
- Push image to the registry:
docker push $ROUTE/$PROJECT/$IMAGE
- List image in the registry:
docker image ls $ROUTE/$PROJECT/$IMAGE
- List images in the registry:
oc get images
- List image streams:
oc get imagestreams
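Putting it together, a sketch of logging in, tagging, and pushing once the default route has been exposed (default-route is the route created by the registry operator):
ROUTE=$(oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}')
oc whoami --show-token | docker login -u $(oc whoami) --password-stdin $ROUTE
docker tag $IMAGE $ROUTE/$PROJECT/$IMAGE
docker push $ROUTE/$PROJECT/$IMAGE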
Volumes
- Add an emptyDir volume to a deployment config and create a new deployment:
oc set volume dc/$DC --add --mount-path=$LOCALPATH
Security
Debug a node
oc get nodes
oc debug nodes/$NAME
chroot /host
oc login [...]
Debug a pod
oc get pods
oc debug pod/$NAME --image registry.access.redhat.com/ubi8/ubi:8.0
Cluster management
- List nodes:
oc get nodes
- Describe a node:
oc describe node $NAME
- Show max pods of a node:
oc describe node $NAME | awk '/^Allocatable/ { alloctable=1; } alloctable && /pods:/ { print; exit; }'
- Show cluster CPU usage:
oc adm top node
- Gather debugging information for a resource:
oc adm inspect $RESOURCE
- Run MustGather:
oc adm must-gather
- Dump cluster administration details and logs:
kubectl cluster-info dump > clusterdump.txt
Other commands
- oc get all -o name --selector app=${APP}: List all resources for an application.
- oc delete all --selector app=${APP}: Schedule deletion of all resources related to an application.
- oc rsync: Retrieve/update files in a running container (see the example below).
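A sketch of oc rsync in both directions (the directory names are illustrative):
oc rsync $POD:/var/log ./pod-logs
oc rsync ./local-config $POD:/tmp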