First, a note: this seems to be rare. I've only seen it on one cluster so far, but I don't know what's causing it, so I don't know whether it will spread.
I updated my test ElasticXL/VSNext cluster from R82 jumbo 60 to jumbo 91 yesterday. Looking at my telemetry in Grafana, member 2 appeared to have a moderate memory leak before the update, and both members have an even worse one after it: over the span of a day, otelcol went from ~100 MB of RSS to 1 GB. I'm not yet sure what's going on. I haven't seen this leak elsewhere, but there's very little special about my lab boxes.

The management for this cluster is running R82.10 jumbo 6 with the same Skyline version and the same Skyline config, and otelcol's RSS there has been stable at about 60 MB.
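
If you don't have Grafana/Skyline history to trend against, a throwaway sampler along these lines can establish the growth rate over a few hours. It's just a sketch: the log path is arbitrary, and it assumes the collector shows up in ps under a name containing "otelcol" (adjust the match for your box).
#!/bin/bash
# Append one timestamped RSS sample (in KB, summed across matching
# processes) to a CSV for trending. Log path is just an example.
LOG=/var/log/otelcol_rss.csv
rss=$(ps -eo rss,comm | awk '/otelcol/ {sum += $1} END {print sum+0}')
echo "$(date +%s),$rss" >> "$LOG"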

The first dip back to baseline was when I rebooted both members for jumbo 91. The second, more recent one came from restarting the Skyline processes on both members with this script:
#!/bin/bash
source /etc/profile.d/CP.sh
echo "Started: $(date +%s)" >> /var/log/skyline_restart_log.txt

# The three Skyline service wrappers: OTel collector, OTLP agent, CPView exporter
skylinesvcs=( "$CPOTELCOL_DIR/CPotelcolCli.sh" "$CPOTLPAGENT_DIR/CPotlpagentCli.sh" "$CPVIEWEXPORTER_DIR/CPviewExporterCli.sh" )

# Stop the Skyline services, restart cpviewd under the cpwd watchdog, and toggle cpview
for s in "${skylinesvcs[@]}"; do "$s" stop; done
cpwd_admin stop -name CPVIEWD; sleep 1
cpwd_admin start -name CPVIEWD -path "$CPDIR/bin/cpviewd" -command "cpviewd"
cpview -a off; sleep 1; cpview -a on; sleep 1

# Bring the Skyline services back up
for s in "${skylinesvcs[@]}"; do "$s" start; done
echo "Completed: $(date +%s)" >> /var/log/skyline_restart_log.txt
A gigabyte per day is a pretty sizable leak. I'll dig into it more with support on Monday, but I thought people may want to check their own environments in the meantime.
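
For a quick spot check on a gateway, something like this one-liner shows the collector's current footprint (again assuming it appears in ps under a name containing "otelcol"):
ps -eo pid,rss,etime,comm | awk '/otelcol/ {printf "PID %s RSS=%d MB up=%s (%s)\n", $1, $2/1024, $3, $4}'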