Linux OOM Killer — The Process Assassin You Need to Know
Your System Has a Hitman
Somewhere deep inside the Linux kernel, there is a mechanism whose entire purpose is to kill processes. It does not ask for permission. It does not send a polite notification. When your system runs out of memory and there is absolutely nothing else it can do, the OOM Killer (Out of Memory Killer) steps in and terminates a process to free up RAM.
Sounds brutal? It is. But the alternative is worse – a completely frozen system that does nothing at all. The OOM Killer is the kernel’s last resort, a survival mechanism that sacrifices one process so the rest of the system can keep running.
How Does It Choose Its Victim?
The OOM Killer does not pick randomly. Every running process on your system has a score called oom_score. The higher the score, the more likely the process is to be killed. The kernel calculates this score based on several factors, but the most important one is simple: how much memory is the process using?
A process that consumes 4 GB of RAM will have a much higher score than one using 20 MB. On modern kernels the calculation is dominated by memory footprint, with a small discount for processes owned by root; older kernels also factored in things like how long the process had been running.
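As a rough back-of-the-envelope illustration (the exact formula varies between kernel versions), the score scales with the share of total memory the process occupies:
# Hypothetical example: a process with ~2 GB resident on an 8 GB machine
# 2048 MB / 8192 MB * 1000 ≈ 250 (before any oom_score_adj is applied)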
You can check the OOM score of any process:
# Replace PID with the actual process ID
cat /proc/PID/oom_score
For a practical example, let’s check the score of your current shell:
cat /proc/$$/oom_score
You will likely see a small number. Now try checking a browser with 47 tabs open – that score will be significantly higher.
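If you want a quick overview rather than checking PIDs one by one, a small shell loop can rank the current top candidates. This is a rough sketch, not a polished tool:
# List the ten processes with the highest OOM scores
for d in /proc/[0-9]*; do
    printf "%s %s %s\n" "$(cat $d/oom_score 2>/dev/null)" "${d#/proc/}" "$(cat $d/comm 2>/dev/null)"
done | sort -rn | head -10
The output is simply "score PID name", sorted with the most likely victims at the top.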
How to Protect a Process from Being Killed
Sometimes you have a process that absolutely must not be killed. A database server, for example. You can tell the kernel to leave it alone by adjusting its oom_score_adj value.
The oom_score_adj value ranges from -1000 to 1000. Setting it to -1000 effectively makes the process immune to the OOM Killer:
# Protect a process (replace PID with the actual process ID)
echo -1000 | sudo tee /proc/PID/oom_score_adj
Conversely, if you want a process to be the first to go:
# Make a process the preferred target
echo 1000 | sudo tee /proc/PID/oom_score_adj
Keep in mind that this setting does not survive a reboot. For persistent configuration, you can set OOMScoreAdjust=-1000 in a systemd service file.
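For example, a drop-in for a hypothetical database service might look like this (mariadb.service is just a placeholder; adjust for your own unit):
# /etc/systemd/system/mariadb.service.d/oom.conf (hypothetical drop-in)
[Service]
OOMScoreAdjust=-1000
Run sudo systemctl daemon-reload and restart the service for the change to take effect.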
How to See OOM Kills in Logs
When the OOM Killer strikes, it leaves evidence. You can find it in dmesg:
dmesg | grep -i "oom"
A typical OOM kill message looks something like this:
[ 234.567890] Out of memory: Killed process 1234 (some-app) total-vm:4567892kB, anon-rss:3210456kB
This tells you exactly which process was killed, how much virtual memory it had reserved, and how much physical memory (RSS) it was actually using. If you see these messages appearing regularly, your system has a memory problem that needs attention.
You can also check the system journal:
journalctl -k | grep -i "oom"
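If the machine has rebooted since the kill, the current dmesg buffer will not help, but the journal from the previous boot usually will:
# Search the kernel log from the previous boot
journalctl -k -b -1 | grep -i "oom"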
Triggering OOM on Purpose (Don’t Do This in Production)
For the curious – and only on a test machine you do not care about – you can trigger the OOM Killer intentionally. Here is a simple memory hog written as a Python one-liner:
# WARNING: This WILL crash processes on your system.
# Only run this on a disposable test VM.
python3 -c "a = []; [a.append(' ' * 10**6) for _ in iter(int, 1)]"
You can also push the system toward OOM with a classic fork bomb, which chews through the process table as well as memory, but seriously, do not run this on anything important:
# DO NOT run this on a production system. You have been warned.
:(){ :|:& };:
The point of mentioning these is not to encourage chaos, but to help you understand what happens when memory is exhausted. If you test this on a VM, watch dmesg -w in another terminal to see the OOM Killer in action.
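A somewhat safer variant of the experiment is to run the memory hog inside a transient cgroup with a hard cap, so the kernel only kills processes within that scope. Here is a sketch using systemd-run, assuming a cgroup v2 system; the 500M limit is an arbitrary choice:
# Run the hog under a 500 MB cap; only it should get OOM-killed
sudo systemd-run --scope -p MemoryMax=500M \
    python3 -c "a = []; [a.append(' ' * 10**6) for _ in iter(int, 1)]"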
Quick Tips to Avoid OOM Problems
Here are a few practical things you can do to prevent your system from reaching the OOM state:
- Monitor memory usage. Tools like htop, vmstat, or free -h are your friends. Do not wait for the kernel to tell you there is a problem.
- Set memory limits. Use cgroups or systemd's MemoryMax directive to cap how much RAM a service can consume (see the sketch after this list).
- Configure swap. Swap is not a replacement for RAM, but it gives the system breathing room before the OOM Killer has to intervene.
- Check your applications for memory leaks. A process that slowly eats all your RAM over days or weeks is a classic OOM trigger.
- Use oom_score_adj wisely. Protect your critical services, but do not protect everything – if nothing can be killed, the whole system locks up.
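As a sketch of the MemoryMax idea, a drop-in for a hypothetical service could cap it like this (some-app.service and the 2G value are placeholders):
# /etc/systemd/system/some-app.service.d/memory.conf (hypothetical drop-in)
[Service]
MemoryHigh=1800M
MemoryMax=2G
MemoryHigh starts reclaiming memory from the service before the hard MemoryMax cap is reached, which often avoids an abrupt kill.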
Final Thought
The OOM Killer is not a bug. It is not a flaw. It is a deliberate design choice that keeps Linux systems running when things go wrong. Understanding how it works puts you in a much better position to manage your systems, protect your critical services, and debug those mysterious “my process just disappeared” incidents.
The next time a process vanishes without a trace, check dmesg. The OOM Killer might have something to confess.