Issue found in nightly CI: while SonarQube seems to start up fine in the k3d CI, the RKE2 pipeline consistently fails with the pod crashlooping:

- https://repo1.dso.mil/platform-one/big-bang/bigbang/-/jobs/6234798
- https://repo1.dso.mil/platform-one/big-bang/bigbang/-/jobs/6262554

Pod log output is:

```bash
2021.09.06 17:26:58 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2021.09.06 17:26:59 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on [HTTP: 127.0.0.1:9001, TCP: 127.0.0.1:40189]
2021.09.06 17:27:00 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
2021.09.06 17:27:01 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
warning: no-jdk distributions that do not bundle a JDK are deprecated and will be removed in a future release
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2021.09.06 17:28:23 INFO es[][o.e.n.Node] version[7.12.1], pid[26], build[default/tar/3186837139b9c6b6d23c3200870651f10d3343b7/2021-04-20T20:56:39.040728659Z], OS[Linux/4.18.0-305.3.1.el8_4.x86_64/amd64], JVM[Red Hat, Inc./OpenJDK 64-Bit Server VM/11.0.11/11.0.11+9-LTS]
2021.09.06 17:28:23 INFO es[][o.e.n.Node] JVM home [/usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64]
2021.09.06 17:28:23 INFO es[][o.e.n.Node] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/opt/sonarqube/temp, -XX:ErrorFile=../logs/es_hs_err_pid%p.log, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=COMPAT, -Des.enforce.bootstrap.checks=true, -Xmx512m, -Xms512m, -XX:MaxDirectMemorySize=256m, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/opt/sonarqube/elasticsearch, -Des.path.conf=/opt/sonarqube/temp/conf/es, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=false]
2021.09.06 17:28:44 INFO es[][o.e.p.PluginsService] loaded module [analysis-common]
2021.09.06 17:28:44 INFO es[][o.e.p.PluginsService] loaded module [lang-painless]
2021.09.06 17:28:44 INFO es[][o.e.p.PluginsService] loaded module [parent-join]
2021.09.06 17:28:44 INFO es[][o.e.p.PluginsService] loaded module [percolator]
2021.09.06 17:28:44 INFO es[][o.e.p.PluginsService] loaded module [transport-netty4]
2021.09.06 17:28:44 INFO es[][o.e.p.PluginsService] no plugins loaded
2021.09.06 17:28:46 INFO es[][o.e.e.NodeEnvironment] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/nvme0n1p2)]], net usable_space [136.3gb], net total_space [149.9gb], types [xfs]
2021.09.06 17:28:46 INFO es[][o.e.e.NodeEnvironment] heap size [503.6mb], compressed ordinary object pointers [true]
2021.09.06 17:28:47 INFO es[][o.e.n.Node] node name [sonarqube], node ID [tcHcF_4KSI-8lPpbDbjQEA], cluster name [sonarqube], roles [master, remote_cluster_client, data, ingest]
2021.09.06 17:29:17 INFO app[][o.s.a.SchedulerImpl] Stopping SonarQube
2021.09.06 17:29:18 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2021.09.06 17:29:18 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
2021.09.06 17:29:18 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 143
```

Google results for `Process exited with exit value [es]: 143` yield a wide variety of problems, but in general they point to either storage or memory issues. I suspect that while the resources may be sufficient overall, the JVM arguments are causing us issues (specifically `-Xmx512m, -Xms512m`, maybe?). Note that these settings apply to the Elasticsearch instance bundled with SonarQube, and to my knowledge they are not directly accessible via Helm values.
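One possible angle for experimenting with those heap flags: SonarQube reads the bundled Elasticsearch's JVM options from the `sonar.search.javaOpts` property, which the official SonarQube Docker images also accept as the `SONAR_SEARCH_JAVAOPTS` environment variable (there is also `sonar.search.javaAdditionalOpts` / `SONAR_SEARCH_JAVAADDITIONALOPTS` for appending rather than replacing). A hedged sketch of a values override, assuming the chart in use passes pod environment variables through (the `env:` key name here is an assumption and may differ per chart):

```yaml
# Hypothetical Helm values fragment -- the "env" key is an assumption
# and depends on the SonarQube chart being deployed.
env:
  - name: SONAR_SEARCH_JAVAADDITIONALOPTS
    # Appended after the defaults, so these heap flags would override
    # the -Xmx512m/-Xms512m seen in the pod log above.
    value: "-Xmx1024m -Xms1024m"
```

Using the `...ADDITIONALOPTS` variant avoids having to restate the full default option list that `sonar.search.javaOpts` would otherwise replace wholesale.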