OpenShift

OpenShift is a set of open-source projects and Red Hat products based on those projects.

Architecture

An OpenShift node runs Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) with OpenShift installed.

In addition to the Kubernetes master node components, an OpenShift master node also runs OpenShift Master Services, the major value-add over Kubernetes (the OpenShift API server, Operator Lifecycle Manager, and the OpenShift Container Platform Web Console), and Infrastructure Services (a curated set of pods including the Kubernetes master pods, monitoring, logging, OS tuning, and so on).

MachineSets are normally scoped to logical boundaries such as availability zones and provide templates to create and add hardware to the cluster. MachineSets are based on the Kubernetes Cluster API.
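
As an illustration, a structural sketch of a MachineSet follows; the infrastructure ID in the names and labels, the availability zone, and the providerSpec contents are placeholders and vary by cluster and cloud provider.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-abc12-worker-us-east-1a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: mycluster-abc12
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: mycluster-abc12
      machine.openshift.io/cluster-api-machineset: mycluster-abc12-worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: mycluster-abc12
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: mycluster-abc12-worker-us-east-1a
    spec:
      providerSpec:
        value: {}  # cloud-provider-specific machine configuration (instance type, image, zone, etc.)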

Operator Lifecycle Manager

Operator Lifecycle Manager (OLM) automates keeping the Operator-based workloads and middleware deployed on the cluster up-to-date (including over-the-air updates).
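
Operators are typically installed and kept current through a Subscription; a minimal sketch follows, with the operator name, channel, and catalog source as illustrative values.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: openshift-operators
spec:
  name: my-operator                # package name in the catalog
  channel: stable                  # update channel OLM tracks for over-the-air updates
  source: redhat-operators         # CatalogSource providing the package
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic   # apply updates automatically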

OpenShift Container Platform Web Console

The OpenShift Container Platform Web Console is the main OpenShift administrative web interface.

Topology View
  • Statuses: Running, Not Ready, Warning, Failed, Pending, Succeeded, Terminating, and Unknown
  • To add a service to an existing application group, hold Shift and drag on top of that group. This adds the required labels.

OpenShift Resources

  • A ReplicationController is functionally equivalent to a ReplicaSet. Both control and monitor the number of pods running for a deployment.
  • A BuildConfig defines triggers to start a new build and deployment.

DeploymentConfigs

A DeploymentConfig is similar to a Kubernetes Deployment with some differences:

  • As per the CAP theorem, DeploymentConfigs prefer consistency whereas Deployments prefer availability. With DeploymentConfigs, if a node running a deployer pod crashes, the rollout waits until the node comes back online or is manually deleted. With Deployments, the controller manager uses leader election across masters, so if one controller crashes another can act on the same Deployment and reconcile its state. Deployments and ReplicaSets are recommended, although oc new-app uses DeploymentConfigs and ReplicationControllers. Changing a Deployment's pod template creates a new ReplicaSet.
  • DeploymentConfigs support automatic rollback on a failure, lifecycle hooks, and custom deployment strategies.
  • Deployments support faster rollouts, proportional scaling, and pausing mid-rollout.

In sum:

[...] it is recommended to use Deployments unless you need a specific feature or behavior provided by DeploymentConfigs.
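
For reference, a minimal DeploymentConfig sketch showing the OpenShift-specific triggers and strategy fields; the application name and image are placeholders.

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  strategy:
    type: Rolling                  # DeploymentConfig-specific strategies and lifecycle hooks go here
  triggers:
    - type: ConfigChange           # redeploy on configuration changes
    - type: ImageChange            # redeploy when the ImageStreamTag is updated
      imageChangeParams:
        automatic: true
        containerNames:
          - myapp
        from:
          kind: ImageStreamTag
          name: myapp:latest
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: image-registry.openshift-image-registry.svc:5000/myproject/myapp:latest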

Continuous Integration

OpenShift implements CI with Jenkins. Jenkins can integrate with source code repositories such as Git by using a commit hook to pull new source code and build it automatically. OpenShift Pipelines, built on top of the open source Tekton project, is a separate, cloud-native way to define an application lifecycle. After building with Jenkins, the pipeline invokes OpenShift's Source-to-Image (S2I) build using the compiled source and a builder image, builds a container using podman build, and pushes it to the registry.

OpenShift Service Mesh

OpenShift Service Mesh combines the Istio, Kiali (UI), and Jaeger (tracing) projects to provide security and network segmentation for microservices applications.

Istio uses sidecar containers to enhance service-to-service communication. This includes service discovery, load balancing, failure recovery, metrics, monitoring, A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

Kiali visualizes an entire graph of services scheduled within an instance of the mesh.

OpenShift Image Registry

The built-in registry's default address is image-registry.openshift-image-registry.svc at port 5000 and default images are located in the openshift project; for example, image-registry.openshift-image-registry.svc:5000/openshift/httpd.

Health Checks

Types of health checks:

  • Liveness check to see if the container is running
  • Readiness check to see if a pod is ready to serve requests

To perform a health check, one of three probe methods is used (see the sketch after this list):

  • HTTP check: succeeds if the response status code is between 200 and 399
  • TCP socket check: succeeds if a socket can be opened to the container
  • Container execution check: succeeds if a command executed in the container returns exit code 0
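
A minimal sketch combining these probe types in a pod spec; the image, port, and /health path are assumptions for a hypothetical HTTP application.

apiVersion: v1
kind: Pod
metadata:
  name: probe-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/httpd-24
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:                   # HTTP check: passes if the status code is 200-399
          path: /health
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
      readinessProbe:
        tcpSocket:                 # TCP socket check: passes if the socket can be opened
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10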

OpenShift Serverless

OpenShift Serverless uses Knative to develop serverless applications. It integrates with Apache Camel K and supports event sources such as HTTP, Kafka, and AMQP. Knative Serving has four main resources (a sketch follows this list):

  • Configuration: separates code from configuration, following the Twelve-Factor methodology.
  • Revision: an immutable version of an application.
  • Route: maps a URL to one or more Revisions.
  • Service: creates and manages a Route, Configuration, and Revisions.
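
A minimal Knative Service sketch; creating it generates the Configuration, Revision, and Route automatically. The image is the standard Knative hello-world sample.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "OpenShift Serverless"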

Eventing in Knative supports brokers, triggers, and subscriptions. A broker represents an event mesh in which events can be sent to multiple subscribers interested in that event and in which durability, persistence, performance, and other semantics can be adjusted.
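
For instance, a Trigger subscribing a service to events from a broker might look like the following sketch; the broker name, event type, and subscriber are illustrative.

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: hello-trigger
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # only deliver events of this CloudEvent type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello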

Autoscaling

Autoscaling is configured with a target concurrency for the application in the revision template (see the sketch after this list):

  • target annotation and/or containerConcurrency: Number of concurrent requests handled by each revision container.
  • minScale/maxScale: Minimum and maximum pods.
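
A sketch of these settings on a revision template; note that the annotation names vary slightly between Knative versions (for example, minScale vs. min-scale).

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "10"    # soft concurrency target per pod
        autoscaling.knative.dev/minScale: "1"   # keep at least one pod to avoid cold starts
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containerConcurrency: 20                  # hard limit on concurrent requests per pod
      containers:
        - image: gcr.io/knative-samples/helloworld-go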

OpenShift do (odo)

OpenShift do (odo) is a successor to Appsody and provides templates for building cloud-native applications.

A devfile (devfile.yaml) describes a development environment and tells odo how to build and push an application to the cloud.
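
A minimal devfile 2.x sketch; the component image and build command are illustrative.

schemaVersion: 2.0.0
metadata:
  name: my-service
components:
  - name: tools
    container:
      image: registry.access.redhat.com/ubi8/openjdk-11   # development container image
      memoryLimit: 512Mi
commands:
  - id: build
    exec:
      component: tools
      commandLine: mvn package
      workingDir: ${PROJECTS_ROOT}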

Listing devfiles

$ odo catalog list components
Odo Devfile Components:
NAME                          DESCRIPTION                                                         REGISTRY
java-maven                    Upstream Maven and OpenJDK 11                                       DefaultDevfileRegistry
java-openliberty              Open Liberty microservice in Java                                   DefaultDevfileRegistry
[...]

Create application

  1. Ensure you're logged in with oc
  2. Create a project (this creates a new namespace in the target cluster; nothing local):
    odo project create testodo
  3. Convert an existing application (not covered below), clone a sample application, or use a skeleton sample application using a starter:
    1. Clone sample:
      1. git clone https://github.com/OpenLiberty/application-stack-intro
      2. cd application-stack-intro
      3. odo create java-openliberty testodoserviceintro
    2. Use a skeleton starter:
      1. mkdir testodoservice1 && cd testodoservice1
      2. odo create java-openliberty testodoservice1 --starter
  4. Push the service to the cloud:
    odo push
  5. Once the push is complete, the URL will be displayed (or available later with odo url list).
  6. For the application-stack-intro sample (but not the skeleton starter), test the application at /api/resource:
    curl $(odo url list | awk 'NR == 3 {print $3;}')/api/resource
  7. After making a change, use odo push again, or use odo watch to automatically push when changes are detected.
  8. If you want to print the container logs:
    odo log
    Or tail them:
    odo log -f
  9. Once the project/application is no longer needed, delete the project (this deletes the namespace, stops running pods, etc.):
    odo project delete testodo

CodeReady Workspaces

CodeReady Workspaces is based on Eclipse Che and provides collaborative development with integrated Continuous Integration and Continuous Deployment.

Red Hat Ansible Automation Platform

Ansible provides simple, powerful, and agentless automation of tasks using playbooks written in YAML pushed to servers over SSH.
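
A minimal playbook sketch; the webservers host group and the httpd package are placeholders for illustration.

- name: Install and start Apache
  hosts: webservers
  become: true
  tasks:
    - name: Install httpd
      ansible.builtin.yum:
        name: httpd
        state: present
    - name: Ensure httpd is running and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true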

Networking

Container networking is based on integrated Open vSwitch with support for integration with a third-party software-defined network (SDN).

For external traffic, DNS resolves to a Router container (not to be confused with an Ingress/Route resource) which then uses the openshift-sdn overlay network to route to internal IPs.
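
For reference, a minimal Route sketch that exposes a Service through the router; the hostname and service name are placeholders.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com     # external DNS name resolving to the router
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080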

Logging

Cluster logging is managed through an EFK stack: Elasticsearch (log store), Fluentd (forwards logs from each node to Elasticsearch), and Kibana (web UI to search logs).

Application Configuration

Application configuration should be read from the environment (e.g. mounted storage, environment variables) to allow dynamic updates and reuse of the same image across environments.

Environment Variables

Envars may be set with secrets, ConfigMaps, and/or the Downward API. These may be set at different scopes including a deployment configuration, replication controller, or build configuration.

Secrets

Secrets are a standardized, base64-encoded (not encrypted) mechanism for providing sensitive data, such as passwords, to an application, either as an envar or as a secret filesystem volume. Secret volumes are stored on tmpfs.
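
A minimal sketch of a Secret consumed as an environment variable; the secret name, key, and image are illustrative.

apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:
  password: changeme               # stored base64-encoded, not encrypted
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "3600"]
      env:
        - name: APP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-credentials
              key: password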

ConfigMaps

ConfigMaps populate configuration data in a container from a directory, file, or literal value. ConfigMaps cannot be shared across projects, must be created before the pod that consumes them, and updating a ConfigMap does not update environment variables in running pods.
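
A minimal sketch of a ConfigMap injected as environment variables; the names and values are illustrative (the same ConfigMap could be created from a literal with oc create configmap app-config --from-literal=LOG_LEVEL=debug).

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: app-config       # each key becomes an environment variable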

Downward API

The Downward API allows pods to access information about Kubernetes resources through environment variables and/or volumes.
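
A minimal sketch exposing pod metadata as environment variables through the Downward API; the variable names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: downward-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sh", "-c", "echo $MY_POD_NAME in $MY_NAMESPACE && sleep 3600"]
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace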

Persistent Storage

Storage volumes are exposed as PersistentVolumes created by administrators. PersistentVolumes are consumed by pods through PersistentVolumeClaims using an access mode:

  • ReadWriteOnce: Mountable read-write by only a single node at one time.
  • ReadOnlyMany: Mountable read-only by many nodes at one time.
  • ReadWriteMany: Mountable read-write by many nodes at one time.

Dynamically provisioned storage provides access to underlying storage infrastructure through plugins.
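
For example, dynamic provisioning is typically requested by referencing a StorageClass from a PVC; the class name and provisioner below are illustrative and depend on the platform.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs   # the plugin/provisioner varies by underlying infrastructure
parameters:
  type: gp2

A PersistentVolumeClaim then requests it with spec.storageClassName: fast.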

Volume types:

  • emptyDir: Empty directory created at pod initialization and lost at the end of the pod's life (i.e. non-persistent). Useful for scratch data shared across containers in the same pod. Set emptyDir.medium to Memory to use tmpfs.
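
A minimal sketch of a tmpfs-backed emptyDir volume as described above; the mount path and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory             # back the volume with tmpfs instead of node disk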

PersistentVolumeClaim

First, create the PVC with a name:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "pvol"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      [...]

Then associate the PVC with a pod:

apiVersion: "v1"
kind: "Pod"
metadata:
  name: "mypod"
  labels:
    name: "frontendhttp"
spec:
  containers:
    -
      name: "myfrontend"
      image: "nginx"
      ports:
        -
          containerPort: 80
          name: "http-server"
      volumeMounts:
        -
          mountPath: "/var/www/html"
          name: "pvol"
  volumes:
    -
      name: "pvol"
      persistentVolumeClaim:
        claimName: "pvol"

Templates

A Template describes a set of objects that can be parameterized and labeled, then processed to produce a configuration of resources for OpenShift to create.
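
A minimal Template sketch with one parameter and one object; the names are illustrative. It can be instantiated with oc process -f template.yaml -p APP_NAME=myapp | oc create -f -.

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template
labels:
  template: example-template       # applied to every object produced from the template
parameters:
  - name: APP_NAME
    description: Application name
    required: true
objects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${APP_NAME}
    spec:
      selector:
        app: ${APP_NAME}
      ports:
        - port: 8080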

Prometheus

The node-exporter exposes metrics from each node, which Prometheus scrapes.

The Horizontal Pod Autoscaler (HPA) may use any Prometheus metric exposed through the custom metrics API (e.g. via the Prometheus adapter).
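
A sketch of an HPA scaling on a custom metric; this assumes a Prometheus adapter exposes a hypothetical http_requests_per_second metric for the target pods.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # hypothetical custom metric from the adapter
        target:
          type: AverageValue
          averageValue: "50"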

Alert Rule

  1. Operators > Installed Operators > Prometheus Operator > Prometheus Rule
  2. Create a new PrometheusRule instance.
  3. Under spec.groups, add YAML with PromQL:
    spec:
      groups:
        - name: ./example.rules
          rules:
            - alert: ExampleAlert
              expr: vector(1)
        - name: libertyexample
          rules:
          - alert: heapUsageTooHigh
            expr: base_memory_usedHeap_bytes / base_memory_maxHeap_bytes > 0.9
            for: 1m
            labels:
              severity: warning
            annotations:
              summary: "Heap usage is too high"
              description: "{{ $labels.instance }} heap usage is too high"
  4. Change labels to match the ruleSelector in the Prometheus YAML; for example:
    labels:
        prometheus: k8s
        role: prometheus-rulefiles
  5. Check the rule's status under Status > Rules
  6. Check whether the alert has fired under Alerts

See https://openliberty.io/blog/2020/04/15/prometheus-alertmanager-rhocp-open-liberty.html

YAML Examples

Custom Resource Definition

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mycrds.example.com   # must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: MyCRD
    listKind: MyCRDList
    plural: mycrds
    singular: mycrd
    shortNames:
    - m
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      scale:                 # enables oc scale against .spec.replicaCount
        specReplicasPath: .spec.replicaCount
        statusReplicasPath: .status.replicas

Followed by:

oc create -f mycrd.yaml

Create a custom resource from a CRD:

apiVersion: example.com/v1
kind: MyCRD
metadata:
  name: mycr
spec:
  replicaCount: 2

Followed by:

oc create -f mycr.yaml

List CRs:

oc get mycrds

Describe a CR:

oc describe mycrd mycr

Increase the number of replicas of a CR (this works because the CRD above defines the scale subresource):

oc scale mycrd mycr --replicas=2

Delete a CR:

oc delete mycrd mycr

Jenkins BuildConfig Pipeline

kind: "BuildConfig"
apiVersion: build.openshift.io/v1
metadata:
  name: "pipeline1"
spec:
  triggers:
  - github:
      secret: $GHSECRET
    type: GitHub
  - generic:
      secret: $OTSECRET
    type: Generic
  strategy:
    type: "JenkinsPipeline"
    jenkinsPipelineStrategy:
      jenkinsfile: |
                    pipeline {
                      agent any
                      stages{
                        stage("Build") {
                          steps{
                            script{
                              openshift.withCluster() {
                                openshift.withProject() {
                                  echo '*** Build Starting ***'
                                  openshift.selector('bc', '$APP').startBuild("--wait").logs('-f')
                                  echo '*** Build Complete ***'
                                }
                              }
                            }
                          }
                        }
                        stage("Deploy and Verify in Development Env"){
                          steps{
                            script{
                              openshift.withCluster() {
                                openshift.withProject() {
                                  echo '*** Deployment Starting ***'
                                  openshift.selector('dc', '$APP').rollout().latest()
                                  echo '*** Deployment Complete ***'
                                }
                              }
                            }
                          }
                        }
                      }
                    }