Goroutine leak in websocketconn
Thinking about #33364 (moved), I found that snowflake-server is using a large amount of memory; it looks like a memory leak of some kind.
$ top -o%MEM
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
26910 debian-+ 20 0 1916628 1.522g 0 S 0.0 77.8 58:51.37 snowflake-serve
The memory use seems to be inhibiting other processes. runsvdir puts status messages in its own argv so you can inspect them with ps. Currently it's reflecting xz not being able to allocate memory to compress logs:
$ ps ax | grep runsvdir
1358 ? Ss 94:01 runsvdir -P /etc/service log: locate memory \
svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d xz: (stdin): Cannot allocate memory \
svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d xz: (stdin): Cannot allocate memory \
svlogd: warning: processor failed, restart: /home/snowflake-proxy/snowflake-proxy-standalone-17h.log.d
I even hit the same error just now while trying to run a diagnostic command (it doesn't happen every time):
$ ps ax | grep standal
-bash: fork: Cannot allocate memory
In the short term, it looks like we need to restart the server. Then we need to figure out what's causing it to use so much memory.
The server was last restarted 2020-02-10 18:57 (one week ago) at ca9ae12c383405bc9a755e1bc902e9755495c1f1 for #32964 (moved).