chore: update appliance-mode values and docs
Update Appliance-Mode Values
Summary
I was tasked with evaluating an edge use case and was directed to use the appliance-mode values. The values as written would not deploy on a 4 CPU / 16 GB node. The changes in this MR deployed and ran successfully in my test environments. My goal was to keep CPU requests near 100% of node capacity and CPU limits near 200%, with memory requests near 50% and memory limits near 100%.
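The snapshot under Relevant logs/screenshots is the standard `kubectl describe node` output; the same command can be rerun on any candidate appliance node to re-check the budget (the node name below is a placeholder, not an actual value from this environment):

```shell
# Show per-pod requests/limits and the node-level "Allocated resources" totals.
# <node-name> is a placeholder for the appliance node being evaluated.
kubectl describe node <node-name>
```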
Relevant logs/screenshots
```
Non-terminated Pods: (23 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-75fc8f8fff-wgfgk 100m (2%) 0 (0%) 70Mi (0%) 170Mi (1%) 23m
kube-system local-path-provisioner-5b5579c644-h8j46 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23m
kube-system metrics-server-74474969b-gmtct 100m (2%) 0 (0%) 70Mi (0%) 0 (0%) 23m
flux-system helm-controller-6444bfbd57-dzpp2 500m (12%) 500m (12%) 1Gi (6%) 1Gi (6%) 21m
flux-system notification-controller-5ff9dc9f8d-d5qrb 100m (2%) 100m (2%) 200Mi (1%) 200Mi (1%) 21m
flux-system kustomize-controller-7bbb47f98f-zdh4v 100m (2%) 100m (2%) 600Mi (3%) 600Mi (3%) 21m
flux-system source-controller-6567b9cd77-mgwsw 100m (2%) 100m (2%) 384Mi (2%) 384Mi (2%) 21m
gatekeeper-system gatekeeper-audit-859f86477-jz4z5 100m (2%) 600m (15%) 256Mi (1%) 512Mi (3%) 19m
gatekeeper-system gatekeeper-controller-manager-6b5bf7fb68-rctnj 100m (2%) 175m (4%) 256Mi (1%) 512Mi (3%) 19m
istio-operator istio-operator-6f4d86f76f-g2xvm 200m (5%) 200m (5%) 256Mi (1%) 256Mi (1%) 15m
kube-system svclb-public-ingressgateway-81c659e6-79dw5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
istio-system istiod-697d878cf6-x2qjx 100m (2%) 500m (12%) 256Mi (1%) 2Gi (12%) 15m
istio-system public-ingressgateway-7d6fdfd6cf-trbxb 100m (2%) 2 (50%) 128Mi (0%) 1Gi (6%) 15m
monitoring monitoring-monitoring-prometheus-node-exporter-949zb 200m (5%) 300m (7%) 384Mi (2%) 506Mi (3%) 13m
monitoring monitoring-monitoring-kube-operator-85c4c6c54b-mpvkz 200m (5%) 300m (7%) 384Mi (2%) 768Mi (4%) 13m
monitoring monitoring-monitoring-kube-state-metrics-7c978688d5-5mtgx 110m (2%) 200m (5%) 384Mi (2%) 384Mi (2%) 13m
monitoring alertmanager-monitoring-monitoring-kube-alertmanager-0 250m (6%) 300m (7%) 406Mi (2%) 406Mi (2%) 13m
monitoring prometheus-monitoring-monitoring-kube-prometheus-0 250m (6%) 500m (12%) 434Mi (2%) 2354Mi (14%) 13m
monitoring monitoring-monitoring-grafana-7bd8c8d97b-dq6w7 400m (10%) 400m (10%) 712Mi (4%) 712Mi (4%) 13m
cluster-auditor opa-exporter-78d594466b-dmmzx 200m (5%) 400m (10%) 512Mi (3%) 556Mi (3%) 10m
logging logging-loki-0 200m (5%) 200m (5%) 512Mi (3%) 512Mi (3%) 10m
logging logging-promtail-x646w 150m (3%) 300m (7%) 288Mi (1%) 384Mi (2%) 9m31s
twistlock twistlock-console-7c4554dbbf-v44b7 150m (3%) 350m (8%) 768Mi (4%) 2304Mi (14%) 3m20s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 3710m (92%) 7525m (188%)
memory 8284Mi (51%) 15616Mi (97%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
```
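As a rough sanity check against the targets in the Summary, and assuming the node's allocatable CPU is the full 4000m (4 CPUs), the CPU totals above work out as follows; the memory percentages are reported against node allocatable, which sits slightly below the full 16 GiB once system reservations are subtracted:

```shell
# Rough check, assuming 4000m allocatable CPU; request/limit totals are taken
# from the "Allocated resources" section above.
echo "cpu requests: $(( 3710 * 100 / 4000 ))% of node"   # -> 92%
echo "cpu limits:   $(( 7525 * 100 / 4000 ))% of node"   # -> 188%
```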