IHS OpenShift FAQ

The OpenShift platform provides high availability, routing, failover, and SSL termination without requiring additional runtime-specific reverse proxies such as IHS and the WAS Web Server Plug-in.

For this reason, the recommended topologies of WebSphere Application Server in OpenShift use neither IHS nor the WAS Plug-in.

Although IHS is not normally used in this environment, this FAQ addresses questions that users coming from a traditional IHS/WAS Plug-in background might have about adapting to OpenShift.

Note: The information in this document applies to all Kubernetes-based platforms, not just OpenShift.

General FAQs

  • Is IHS supported as a gateway into (in front of) OpenShift?

    This configuration is not supported. IHS is only supported as a direct gateway to WebSphere.

  • Is IHS supported inside of OpenShift?

    There is no recommended/tested topology of WebSphere Application Server in OpenShift that includes IBM HTTP Server and the WAS Web Server Plug-in.

    However, there are a few configurations that IBM will support as if IHS were running outside of OpenShift:

    1. IHS + the WAS Web Server Plug-in as a gateway forwarding 1:1 to a single WebSphere instance, with the two containers deployed in the same pod (the sidecar pattern).

      This configuration allows requests/responses to be customized inside IHS, including with third-party Apache modules (such as SiteMinder).

      In this configuration, the license service deployment annotation productChargedContainers should list only the Liberty container rather than "All", so that IHS CPU limits are not counted against Liberty (see the sketch after this list).

      The WebSphere Liberty and Open Liberty operators include sidecar support.

    2. IHS + the WAS Web Server Plug-in running in OpenShift as a gateway to WebSphere running outside of OpenShift.

    As in any supported configuration, it is critical to ensure that logs and traces can be captured/persisted.
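
    A minimal sketch of configuration 1, assuming a plain Deployment rather than one of the Liberty operators; the names, labels, and images below are placeholders, not IBM-published values:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: liberty-with-ihs
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: liberty-with-ihs
        template:
          metadata:
            labels:
              app: liberty-with-ihs
            annotations:
              # Charge only the Liberty container, not the IHS sidecar
              # (other license service annotations, such as productID, omitted for brevity)
              productChargedContainers: "liberty"
          spec:
            containers:
            - name: liberty                            # the application server
              image: example.com/my-liberty-app:1.0    # placeholder image
              ports:
              - containerPort: 9080
            - name: ihs                                # IHS + WAS Web Server Plug-in sidecar
              image: example.com/my-ihs-plugin:1.0     # placeholder image
              ports:
              - containerPort: 80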

  • What proxy server can I run in front of the OpenShift cluster?

    It is beyond the scope of this document to recommend a specific enterprise proxy server, but some options to explore include IBM DataPower, ISAM, Istio, the various HTTP proxies included in Red Hat Enterprise Linux (HAProxy, Varnish, Apache HTTP Server), or popular appliances from F5.

Sessions

  • How does affinity work?

    When a service is exposed via a Route, an ingress controller (proxy server) is configured to map a combination of host and path to instances of the service. On the command line, this is done via oc expose or oc create route. Only the latter can specify different types of TLS termination.
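
    For example, assuming a service named my-service (all names here are placeholders):

      # Default route; the TLS termination type cannot be chosen here
      oc expose service my-service

      # Routes with an explicit TLS termination type (edge, passthrough, or reencrypt)
      oc create route edge my-route --service=my-service
      oc create route passthrough my-route-tls --service=my-service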

    When a route is accessed over HTTP, or over HTTPS with any termination mode other than "passthrough", the proxy server sends its own affinity cookie to the client to maintain session affinity with a backend pod.

    When TLS "passthrough" termination is configured on a route, the proxy cannot add cookies. In this case, the proxy uses consistent hashing of the client IP address to map to a backend.

    Regardless of the method of session affinity, planned (scale-in) or unplanned removal of pods will result in failover.

  • How does failover work?

    During a failover, a request carrying a JSESSIONID cookie arrives at a server that was not used to establish the session. If no form of session distribution/persistence has been configured, the session is lost. Some options for persistence include a database, Hazelcast, and WebSphere eXtreme Scale.
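
    For example, a minimal sketch of database session persistence in a Liberty server.xml, assuming a DB2 data source; the driver path, database host, and other connection details are placeholders:

      <server>
        <featureManager>
          <feature>sessionDatabase-1.0</feature>
        </featureManager>

        <library id="DBLib">
          <fileset dir="/opt/drivers" includes="db2jcc4.jar"/>  <!-- placeholder path -->
        </library>

        <dataSource id="SessionDS" jndiName="jdbc/sessions">
          <jdbcDriver libraryRef="DBLib"/>
          <!-- credentials and other settings omitted -->
          <properties.db2.jcc databaseName="SESSIONS" serverName="db.example.com" portNumber="50000"/>
        </dataSource>

        <!-- Persist HTTP sessions to the data source so they survive failover -->
        <httpSessionDatabase id="SessionDB" dataSourceRef="SessionDS"/>
      </server>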

Routes

  • How do I debug a route that doesn't appear to work?

    • Make sure 1 or more pods are running for the service (the oc commands sketched after this list can help with several of these checks)

    • Make sure the <httpEndpoint> in server.xml has host="*"

    • Make sure the ports in the Service match the listening ports in the application container

    • View the logs from a pod running the service and make sure you see a reasonable-looking CWWKT0016I Web application available... message.

    • Log in to the pods running the service and make sure the application responds from within the container.

      • If you don't have an HTTP client in your image, but your bash shell is built with /dev/tcp support, you can send a simple request by editing the command below:

      bash -c 'exec 5<> /dev/tcp/localhost/9080; printf "GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n" >&5; cat <&5'
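
    The following oc commands can help with the checks above; the service, route, label, and pod names are placeholders:

      oc get pods -l app=my-app        # are 1 or more pods Running?
      oc get route my-route            # do the host/path mappings look right?
      oc describe service my-service   # do the Service ports match the container ports?
      oc logs my-app-7c9d5f-abcde      # look for the CWWKT0016I message
      oc rsh my-app-7c9d5f-abcde       # open a shell in the pod to test locally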