Default memory limits for node-exporter pods could cause OOMKilled problems
The current default memory limit for the node-exporter pods is 50Mi. In my k3d development environment this works fine until I add a shared volume mount of /var/log:/var/log
on the servers/agents. With that mount, the node exporters run around 85Mi on the agents and around 125Mi on the servers, which results in OOMKilled pods unless the limits are overridden.
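For reproduction, the mount can be expressed in a k3d cluster config roughly like this (a sketch; the cluster name and node filters are illustrative, and the exact schema depends on the k3d version):

```yaml
# Hypothetical k3d config that mounts the host's /var/log into all nodes,
# matching the /var/log:/var/log shared volume described above
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: dev
volumes:
  - volume: /var/log:/var/log
    nodeFilters:
      - all
```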
For reference, the shared /var/log contains about 18Mi of files, with the majority in /var/log/journal.
I've been running with a 200Mi limit and it appears to be stable.
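As a workaround, the limit can be overridden via chart values along these lines (a sketch; the top-level key `prometheus-node-exporter` assumes a chart embedding the prometheus-community node-exporter subchart, and the actual values path depends on the chart in use):

```yaml
# Assumed values override; adjust the top-level key to match the real chart
prometheus-node-exporter:
  resources:
    requests:
      memory: 200Mi
    limits:
      memory: 200Mi
```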
We should probably reach out to some customers to see whether any of them have had to bump the memory limit to prevent OOMKilled problems, and adjust our default accordingly.