TIL: Docker containers will happily fill your disk when network mounts fail

#docker #linux #raspberry-pi #smb #cifs #systemd #debugging #storage #TIL

After rebooting my Raspberry Pi running several Docker services that write to network storage (SMB/CIFS shares on a NAS), the containers refused to start. The logs showed this error:

[Fatal] Application startup exception
AppFolder /config is not writable
[Trace] DiskProviderBase: Directory '/config' isn't writable. No space left on device

“No space left on device”. WTF.

Initial Investigation

First check: disk space.

$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       7.9G     0  7.9G   0% /dev
tmpfs                      3.2G   15M  3.2G   1% /run
/dev/mmcblk0p2             117G  113G     0 100% /
tmpfs                      8.0G     0  8.0G   0% /dev/shm
tmpfs                      5.0M   48K  5.0M   1% /run/lock
/dev/mmcblk0p1             510M   75M  436M  15% /boot/firmware
tmpfs                      8.0G     0  8.0G   0% /tmp
tmpfs                      1.6G   32K  1.6G   1% /run/user/1000
//192.168.1.171/storage    11T  3.6T  7.0T  34% /mnt/nas/storage
//192.168.1.171/data       11T  3.6T  7.0T  34% /mnt/nas/data

Yep. 100% full. On a 128GB SD card. But I’m storing everything on a NAS with several terabytes of space… right?

The Hunt for Disk Space

Started with the usual suspects:

$ sudo du -h --max-depth=1 / 2>/dev/null | sort -h | tail -20
0       /dev
0       /proc
0       /sys
0       /tmp
4.0K    /media
4.0K    /srv
16K     /lost+found
16K     /opt
56K     /root
5.0M    /etc
15M     /run
124M    /boot
445M    /home
2.0G    /usr
5.9G    /var
3.6T    /
3.6T    /mnt

Wait, what? The total shows 3.6T because of the mounted NAS shares, but the actual directories only add up to ~8.5GB. Something’s wrong here.
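
A flag I wish I'd reached for immediately: du's -x option stops it from crossing filesystem boundaries, so the NAS mounts don't pollute the numbers:

$ sudo du -xh --max-depth=1 / 2>/dev/null | sort -h

With -x the 3.6T entries vanish and only the root filesystem is counted. One caveat that matters for this story: even -x can't count files shadowed underneath an active mount point.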

Let’s check /var more closely:

$ sudo du -h --max-depth=1 /var | sort -h | tail -20
4.0K    /var/local
4.0K    /var/mail
4.0K    /var/opt
12K     /var/spool
40K     /var/tmp
376K    /var/log
1.1M    /var/backups
262M    /var/cache
3.6G    /var/lib
5.9G    /var

Docker is in /var/lib, let’s look:

$ sudo du -h --max-depth=1 /var/lib | sort -h | tail -20
...
3.4G    /var/lib/docker
3.6G    /var/lib
$ sudo du -h --max-depth=1 /var/lib/docker | sort -h
4.0K    /var/lib/docker/runtimes
4.0K    /var/lib/docker/swarm
4.0K    /var/lib/docker/tmp
8.0K    /var/lib/docker/plugins
100K    /var/lib/docker/volumes
184K    /var/lib/docker/network
408K    /var/lib/docker/buildkit
5.6M    /var/lib/docker/image
116M    /var/lib/docker/containers
3.3G    /var/lib/docker/overlay2
3.4G    /var/lib/docker

Everything looks normal. Docker is using 3.4GB, which is reasonable. But the math ain’t mathing:

  • /var: 5.9G
  • /usr: 2.0G
  • /home: 445M
  • /boot: 124M
  • Other: ~100M

Total: ~8.5GB, but df reports 113GB used.

Where are the missing ~105GB?

The “Oh Shit” Moment

Then it hit me. What if… what if the SMB shares weren’t mounted when the containers started processing files after the reboot?

Docker would have written to the local mount point directory instead of the NAS.

$ sudo umount /mnt/nas/storage
$ sudo umount /mnt/nas/data
$ sudo du -sh /mnt/nas/storage
92G     /mnt/nas/storage

$ ls -lah /mnt/nas/storage
total 92G
drwxr-xr-x 2 root root 4.0K Oct 23 07:45 .
drwxr-xr-x 4 root root 4.0K Oct 14 12:30 ..
-rw-r--r-- 1 root root 8.2G Oct 22 23:14 dataset-2024-10.tar.gz
-rw-r--r-- 1 root root 6.8G Oct 22 22:47 backup-prod-db.sql.gz
-rw-r--r-- 1 root root 4.3G Oct 22 21:32 build-artifacts-v2.3.tar
-rw-r--r-- 1 root root 7.1G Oct 23 01:15 logs-archive-2024-10.zip
-rw-r--r-- 1 root root 5.4G Oct 23 02:38 media-processing-output.mp4
... [many more files]

There they are. 92GB of files, sitting on my 128GB SD card, hidden under the mount point.
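
Side note: if unmounting isn't an option (say, something is still actively using the share), you can peek under a mount point via a bind mount of /. A plain (non-recursive) bind gives you a second view of the root filesystem without the NAS mounts stacked on top; /mnt/rootfs below is just a scratch path I made up:

sudo mkdir -p /mnt/rootfs
sudo mount --bind / /mnt/rootfs      # non-recursive: submounts are not carried over
sudo du -sh /mnt/rootfs/mnt/nas/storage
sudo umount /mnt/rootfs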

What Happened?

Here’s the timeline:

  1. Reboot the Raspberry Pi
  2. Docker starts before the SMB shares are mounted (or SMB mount fails)
  3. Services start processing and writing files
  4. Files get written to /mnt/nas/storage/ - but this is now the LOCAL directory on the SD card, not the NAS
  5. SMB mounts eventually happen, covering up the local files
  6. SD card is now full, but you can’t see why
  7. Next reboot, containers can’t start because there’s no disk space

The files are there, just hidden under the mount point. When you mount a filesystem over a directory that contains files, those files become inaccessible but still take up space.
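
You can demo this shadowing behavior in ten seconds with a throwaway tmpfs (paths are arbitrary):

mkdir /tmp/demo
echo hello > /tmp/demo/file.txt
sudo mount -t tmpfs tmpfs /tmp/demo
ls /tmp/demo        # empty - file.txt is shadowed but still on disk
sudo umount /tmp/demo
ls /tmp/demo        # file.txt is back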

The Fix

Immediate Solution

# Stop the containers first, or umount will complain that the target is busy
docker compose stop service1 service2

# Unmount the NAS shares
sudo umount /mnt/nas/storage
sudo umount /mnt/nas/data

# Delete the local files (or move them onto the NAS first if you still need them)
sudo rm -rf /mnt/nas/storage/*
sudo rm -rf /mnt/nas/data/*

# Remount the NAS
sudo mount -a

# Check disk space
$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mmcblk0p2             117G   21G   92G  19% /

# Restart the affected containers
docker compose restart service1 service2

Much better.

Permanent Solution

To prevent this from happening again, you need to ensure SMB mounts are ready before Docker starts. Add this to /etc/fstab:

//192.168.1.171/storage /mnt/nas/storage cifs credentials=/root/.smbcredentials,uid=1000,gid=1000,iocharset=utf8,_netdev 0 0
//192.168.1.171/data /mnt/nas/data cifs credentials=/root/.smbcredentials,uid=1000,gid=1000,iocharset=utf8,_netdev 0 0

The key is _netdev: it marks the share as a network filesystem, so systemd orders the mount after the network is up and hangs it off remote-fs.target. (systemd already infers this from the cifs type, but being explicit is cheap insurance.)
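
If you'd rather not block boot on the NAS at all, systemd's automount support in fstab is worth a look. With x-systemd.automount, the mount point becomes an autofs trap from early boot: any process touching the path blocks until the real mount is in place (or gets an error if mounting fails) instead of silently writing to the SD card. A sketch for one share, with the other options as above; nofail keeps a dead NAS from hanging boot:

//192.168.1.171/storage /mnt/nas/storage cifs credentials=/root/.smbcredentials,uid=1000,gid=1000,iocharset=utf8,_netdev,nofail,x-systemd.automount 0 0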

Alternatively, add health checks to your docker-compose.yml or use systemd dependencies to ensure Docker services start after the mounts are ready. You can also configure Docker to start after remote-fs.target:

# Create an override for the Docker service
sudo systemctl edit docker

# Add these lines. After= alone only orders the units; RequiresMountsFor=
# additionally pulls in the mount units for those paths as hard dependencies.
[Unit]
After=remote-fs.target
RequiresMountsFor=/mnt/nas/storage /mnt/nas/data
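
For defense in depth inside the stack itself, there's a low-tech guard: drop a sentinel file on the NAS share (it only exists on the real share, never on the bare local directory) and make services wait for it. The file name and script below are made up for illustration:

# One-time setup, with the share correctly mounted:
sudo touch /mnt/nas/storage/.nas-online

#!/bin/sh
# wait-for-nas.sh (hypothetical wrapper): if the sentinel is missing, the
# share isn't mounted, so park instead of writing to the SD card
until [ -f /mnt/nas/storage/.nas-online ]; do
    echo "waiting for NAS mount..." >&2
    sleep 5
done
exec "$@"

Use it as the container's entrypoint wrapper. One wrinkle: host mounts don't propagate into already-running containers by default, so a parked service needs a restart once the share is up - annoying, but safe.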

Lessons Learned

  1. Files under mount points still take up space - they’re just hidden from view
  2. Docker containers don't care if your mount is ready - they'll happily write to the local directory (the chattr trick after this list makes such writes fail loudly instead)
  3. Always use _netdev for network mounts in /etc/fstab - especially on systems that start services quickly
  4. When disk usage doesn’t add up, check your mount points - unmount and inspect
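
A bonus safeguard in the same spirit, assuming the mount points live on an ext4 root like this Pi: make the underlying directories immutable. An immutable directory refuses new entries, so if the mount is missing, writes fail instantly instead of quietly filling the SD card. Mounting on top of an immutable directory still works fine:

sudo umount /mnt/nas/storage
sudo chattr +i /mnt/nas/storage   # writes to the bare directory now fail with "Operation not permitted"
sudo mount -a                     # mounting over it is unaffected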

Debugging Commands Reference

# Check disk usage
df -h
df -i  # Check inode usage

# Find what's using space (-x stays on one filesystem, so network mounts don't skew totals)
sudo du -xh --max-depth=1 / | sort -h
sudo du -sh /var/lib/docker

# Find large files (-xdev keeps find from descending into the NAS mounts)
sudo find / -xdev -type f -size +1G 2>/dev/null

# Check what's mounted
mount | grep cifs
df -h -t cifs

# List open deleted files (sometimes these hold space)
sudo lsof +L1

# Check mount points for hidden files
sudo umount /mnt/nas/storage
ls -lah /mnt/nas/storage
sudo du -sh /mnt/nas/storage
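
And the cross-check that would have shortened this whole hunt: compare what df and du report for the same filesystem. A big gap means space is being consumed somewhere du can't reach - files shadowed under mount points, or deleted files still held open by a process:

# df's view of the root fs vs. du's view of reachable files (both in bytes)
df -B1 --output=used /
sudo du -sxB1 / 2>/dev/null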

TL;DR: If your Docker containers write to network mounts (SMB/CIFS/NFS), ensure those mounts are ready before Docker starts. Otherwise, containers will write to the local mount point directory on your system disk, filling it up with files that become hidden once the network mount succeeds. Use _netdev in /etc/fstab and consider systemd service dependencies to prevent this race condition.