# spark-operator
This is a hardened version of the upstream spark-operator image. It is meant to be deployed as a controller for the SparkApplication and ScheduledSparkApplication custom resources, managing the execution of Spark applications on Kubernetes. A chart with deployment instructions will be maintained elsewhere.
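As a rough illustration of what the controller consumes, a minimal SparkApplication manifest might look like the following. This is a sketch based on the upstream spark-operator's `v1beta2` API; the image, jar path, and version values are placeholders, not values shipped with this repository.

```yaml
# Illustrative SparkApplication custom resource; field values are placeholders.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  type: Scala
  mode: cluster
  image: <hardened-spark-image>          # placeholder
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar  # placeholder path
  sparkVersion: "3.1.1"                  # placeholder
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark
  executor:
    instances: 1
    cores: 1
    memory: 512m
```

Applying a manifest like this (with `kubectl apply -f`) is what causes the controller to launch the Spark driver and executor pods.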
It can also be used as a base container; for example, the Jupyter notebook containers, as well as several other containers in the Kubeflow deployment, will rely on it.
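A downstream image built on this base might look like the sketch below. The registry path, tag, and added layers are all hypothetical; they only illustrate the pattern of extending the hardened image.

```dockerfile
# Hypothetical downstream image; the FROM reference is a placeholder,
# not a real registry path for this image.
FROM registry.example.com/spark-operator:latest

# Layer additional configuration on top of the hardened base (illustrative).
COPY notebook-config/ /etc/notebook-config/

# Run as a non-root user, consistent with a hardened base image.
USER 1000
```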
To test, download `tini` from the URL in `hardening_manifest.yaml`, rename the downloaded file to `tini`, and run `make build` followed by `make run`.
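The test steps above can be sketched as a short shell sequence. This assumes the download URL has been copied out of `hardening_manifest.yaml` by hand; `TINI_URL` is a placeholder, not the real value.

```shell
#!/bin/sh
set -e

# Placeholder: substitute the actual URL listed in hardening_manifest.yaml.
TINI_URL="<url from hardening_manifest.yaml>"

curl -fsSL -o tini "$TINI_URL"   # download and save the binary as "tini"
chmod +x tini                    # make it executable before the image build

make build                       # build the hardened container image
make run                         # run the hardened container
```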
To compare against the unhardened container, run `make run-original`.