UNCLASSIFIED - NO CUI

Documentation for adding ingress on a TCP port for a 3rd party package or mission app

Once Big Bang is deployed and 3rd party and mission apps are running in the environment, how do I get data flowing in and out? Specifically, how do I get ingress on a new TCP port routed to the appropriate service, with TLS handled in Istio? For example, I have Kafka running, including a VirtualService, but it is not clear how to direct traffic into the cluster on the appropriate port.

Currently I am working on a proof of concept, so I have tried to keep everything as simple as possible, with complexity to be added later: single replica, NodePort services, no Istio. With this setup I am able to access the cluster externally, and I am including my setup below.

Situation 1: Data into the cluster. I have an app deployed with a deployment manifest and a service manifest; a ConfigMap provides the app configuration. All of my code is in C2S on an air-gapped Big Bang deployment, so I am unable to copy it here, but it is a very basic deployment, nothing fancy, and the services look like the following. (Data streams in on port 9006 from an external source; communication with other apps within the cluster uses other ports.)

apiVersion: v1
kind: Service
metadata:
  name: nnc-svc-ext
spec:
  type: NodePort
  selector:
    app: nnc
  ports:
    - port: 9006
      nodePort: 30006
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nnc-svc
spec:
  selector:
    app: nnc
  ports:
    - port: 21007
      protocol: TCP

My external client connects successfully using the IP of the node that nnc is deployed to as the host and port 30006, but of course looking up the node IP is not a long-term solution.
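For reference, this is the shape of Istio configuration I have been experimenting with for the TCP ingress. It is a sketch only; the gateway selector label, namespaces, and hostnames are assumptions based on a default Istio install, not working config from my environment:

```yaml
# Sketch: expose TCP 9006 through the Istio ingress gateway.
# The selector label and namespaces are assumptions and may
# differ in a given Big Bang environment.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nnc-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway   # assumed ingress gateway pod label
  servers:
    - port:
        number: 9006
        name: tcp-nnc
        protocol: TCP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nnc-vs
  namespace: nnc                # assumed app namespace
spec:
  hosts:
    - "*"
  gateways:
    - istio-system/nnc-gateway
  tcp:
    - match:
        - port: 9006
      route:
        - destination:
            host: nnc-svc-ext.nnc.svc.cluster.local
            port:
              number: 9006
```

One thing I understand from the Istio docs is that the Gateway resource alone does not open the port on the ingress gateway's Kubernetes Service; that Service (and any load balancer fronting it) also needs an entry for 9006, which may be part of what I am missing.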

Situation 2: Data consumed from a Kafka broker, using the Helm chart from https://repo1.dso.mil/platform-one/big-bang/apps/third-party/kafka

I was able to get this working with Istio disabled and external access enabled using type: NodePort, and I could reach the broker from a client outside the cluster. But I have to use the node IP address of the broker and set the advertised listener to that node IP as well. The client connection will need TLS support, and I successfully set this up on the Kafka broker (using JKS files), but my research on Kafka in Kubernetes indicates it is preferable to handle TLS at the load balancer.
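One pattern I have seen suggested for this is routing all brokers through a single TLS port on the gateway using SNI passthrough, so the brokers keep their JKS-based TLS and clients only ever see one address. A sketch of what I believe that would look like; the hostnames, port numbers, and per-broker Service names here are hypothetical, not from my deployment:

```yaml
# Sketch: route TLS Kafka traffic through one gateway port using
# SNI passthrough. Hostnames and the per-broker Service name are
# illustrative assumptions.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kafka-gateway
  namespace: kafka
spec:
  selector:
    app: istio-ingressgateway       # assumed ingress gateway pod label
  servers:
    - port:
        number: 9094
        name: tls-kafka
        protocol: TLS
      tls:
        mode: PASSTHROUGH           # brokers terminate TLS themselves
      hosts:
        - "kafka-0.example.internal" # hypothetical per-broker names
        - "kafka-1.example.internal"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kafka-0-vs
  namespace: kafka
spec:
  hosts:
    - "kafka-0.example.internal"
  gateways:
    - kafka/kafka-gateway
  tls:
    - match:
        - port: 9094
          sniHosts:
            - "kafka-0.example.internal"
      route:
        - destination:
            host: kafka-0-external  # hypothetical per-broker Service
            port:
              number: 9092
```

With this shape, each broker's advertised listener would point at its own SNI hostname on the shared gateway port, which seems like it would address the "10 brokers, 10 IPs" problem, though I have not gotten it working.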

I need to address both situations (even if independently) with the following requirements:

  • TLS for Kafka
  • scalability
  • high availability
  • minimize the number of access points. For example, if I scale to 10 Kafka brokers, I don't want 10 different IPs/hosts to potentially use.

Since I am using Big Bang, leveraging the Big Bang services is a nice-to-have.

Based on this, I was thinking the architecture would include a load balancer fronting everything that can be used as the host, listening on one port for the app and one port for Kafka, with all traffic routed appropriately and TLS for the Kafka clients. I thought the Istio ingress gateway was the way to do this and have tried adding VirtualServices with the appropriate ports, but have not been able to get anything working. If there is a better option for the architecture, I am open to that as long as my requirements are met. Note that I am currently working in a closed environment, so I have to use an internal load balancer and don't care about setting up DNS. Eventually I will move to an environment where an elastic IP can be attached to an internet-facing load balancer.
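One detail from the Istio docs that may be relevant: the extra TCP/TLS ports also have to exist on the ingress gateway's Kubernetes Service so the load balancer forwards them. With a stock Istio install that would be an IstioOperator overlay like the sketch below; whether Big Bang exposes the equivalent through its Helm values is part of my question (the port names and numbers are just examples matching the two situations above):

```yaml
# Sketch: add the app and Kafka ports to the ingress gateway
# Service via the standard IstioOperator API (stock Istio;
# the Big Bang values equivalent is what I am unsure of).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              - name: tcp-nnc
                port: 9006
                targetPort: 9006
              - name: tls-kafka
                port: 9094
                targetPort: 9094
```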

Edited by Patience Henson