Pages from the fire

Synology

Last updated: 2025-11-22

I have a home server to back up and show our photos. I like the Synology devices (appliances, perhaps) for this purpose because they

  1. Are easy to set up
  2. Are reasonably hands-off
  3. Use little power (the ones I chose)
  4. Run Linux (underneath)
  5. Have the apps I need

The Synology Photos application is adequate for backing up photos from our iPhones.

The Jellyfin server and its front ends (for iPad) are spectacular, and the server has a distribution for Synology.

Firewall Docker apps

One of the advantages of running apps in Docker on Synology is that you can restrict outbound requests. Synology’s firewall is really set up to block inbound requests to applications, which is likely the most common use case: you trust the applications you are running, but you don’t trust the wider internet, where attacks may come from.

However, you may want to guard against the rare case where an application you are running has been compromised and it starts reading your files and sending them to some remote server.

Here, running under Docker gives two safeguards. First, you can restrict which directories the container has access to (and you can also mount them read-only so the application can’t erase or mutilate your data).

Second, you can run the containers over a “Bridge network” that is separate from your LAN. This bridge network is just for containers (on Synology it defaults to 172.17.0.0/24).

Containers get an IP address in this range, and the container manager forwards the application’s traffic via this address. You can then set up a firewall rule that blocks requests from those IP addresses (or ranges).
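As a sketch (the image name, container name, and share path here are made up for illustration), launching a container on the default bridge with a read-only bind mount and then finding the IP it received looks like:

```shell
# Hypothetical example: run a container on the default bridge network
# with a read-only bind mount, so it can read but not modify the share.
docker run -d \
  --name photo-viewer \
  --network bridge \
  -v /volume1/photos:/data:ro \
  some/image:latest

# Print the IP the container got on the bridge network; this is the
# address a firewall rule would match on.
docker inspect -f '{{.NetworkSettings.Networks.bridge.IPAddress}}' photo-viewer
```

The `:ro` suffix on the bind mount is the read-only safeguard mentioned above; everything else about the container keeps working, it just cannot write to that path.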

Ideally I would like to block/allow outgoing requests TO certain IP addresses but I haven’t figured out how to do that.

Mount shared folders “traditionally” (2026)

I’ve always used shared Synology folders via Nautilus’ interface, which hides all the machinery operating underneath: Samba, an open source implementation of the SMB protocol.

This works absolutely fine on the graphical desktop using Nautilus for file management, but it is very cumbersome when you try to access these mounts from the command line.

NFS is the native Linux way of sharing directories over a network.

On Synology (as of DSM 7)

  1. Enable NFS in system settings
  2. Add an NFS rule to the shared folder you want
  3. Hostname or IP is the allowlist. I’m on a small in-home network, so I just use “*”. If you are open to the wider world, you need more restrictive allowlisting
  4. “Squash” is important. I set it to “Map all users to admin”; otherwise I can’t get things to mount. I don’t know if this is too broad, but it only applies to this directory tree.
  5. If you have the firewall enabled on the Synology, you’ll have to allowlist the NFS server.
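Once the steps above are done, you can check from a Linux client that the export is actually visible (the hostname below is the same example name used later; `showmount` comes with the client NFS utilities, e.g. nfs-common on Debian/Ubuntu):

```shell
# Ask the NFS server which exports it advertises, and to whom.
showmount -e myserver.local
```

If nothing shows up here, the problem is on the server side (NFS rule, squash, or firewall) rather than with your mount command.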

On the client Linux machine

  1. Create a directory to mount to. I create it under /mnt, e.g. /mnt/my-share
  2. Use mount to mount the network share
sudo mount -t nfs myserver.local:/my/shared/folder /mnt/my-share
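A manual mount disappears on reboot. If you want it to persist, the equivalent /etc/fstab entry (same example server and paths as above) is a sketch like:

```shell
# /etc/fstab entry for the same NFS mount. _netdev delays mounting until
# the network is up; nofail keeps boot from hanging if the NAS is off.
myserver.local:/my/shared/folder  /mnt/my-share  nfs  defaults,_netdev,nofail  0  0
```

After editing fstab, `sudo mount -a` mounts it without rebooting.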

On client Synology (mounting NFS from one Synology to another)

  1. Open “File Station”
  2. Create a shared folder
  3. Create a directory in the shared folder
  4. Go to “Tools” -> “Mount Remote Folder”
  5. Use the IP address of the other Synology: the mDNS name did not work for me even after several attempts. I gave the host Synology a static IP via the router.

Additional apps

Directory permissions

Synology uses ACLs which layer over the traditional Linux file permissions.

NOTE: If a file or directory carries ACLs, ls -al shows a “+” at the end of its traditional Linux permission string.

To see and set ACLs on a Linux system you can use getfacl and setfacl. On Synology you can use synoacltool.
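A concrete example of the Linux-side tools (the file path and the `nobody` user are just for illustration):

```shell
# Create a throwaway file and grant an extra user read access via an ACL.
touch /tmp/acl-demo
setfacl -m u:nobody:r-- /tmp/acl-demo

# Inspect the ACL: the output includes a "user:nobody:r--" entry.
getfacl /tmp/acl-demo

# The traditional permission string now ends in "+", marking the ACL.
ls -l /tmp/acl-demo
```

This is the same “+” marker the NOTE above refers to.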

System partition getting full

This one was a head scratcher. I went to update my system but got the dreaded “There is insufficient system capacity for DSM updates” message.

None of the tips in the linked page in the knowledge center applied to me.

My system partition was indeed very full:

$ df -H
Filesystem         Size  Used Avail Use% Mounted on
/dev/md0           2.5G  2.3G   67M  98% /
devtmpfs           5.1G     0  5.1G   0% /dev
tmpfs              5.2G  250k  5.2G   1% /dev/shm
tmpfs              5.2G   19M  5.2G   1% /run
tmpfs              5.2G     0  5.2G   0% /sys/fs/cgroup
tmpfs              5.2G   30M  5.2G   1% /tmp
/dev/loop0          29M  786k   26M   4% /tmp/SynologyAuthService
/dev/vg1/volume_2  3.0T  2.1T  892G  70% /volume2

but with what?

For a more detailed breakdown I tried

sudo du -sh --exclude='volume*' /* 2>/dev/null

This excludes the volume1, … shared folders that are not part of the system partition (or so I thought).
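A variant that avoids the exclude pattern entirely: du’s -x flag refuses to cross mount points, so separately mounted volumes are skipped automatically while anything actually living on the root filesystem still shows up:

```shell
# Per-directory sizes on the root filesystem only (-x does not cross
# mount points), sorted so the biggest entries end up at the bottom.
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h
```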

I scoured the interwebs and tried some troubleshooting tips.

I used the built-in synocleaner tool

sudo synocleaner --delete-all-core
sudo synocleaner --delete-log
sudo synocleaner --delete-journal

But it didn’t free up much space. I just didn’t have that much on my system partition, and the packages I installed (Jellyfin, Synology Photos) were well behaved and installed themselves outside the system volume.

Then I ran into this thread. The person claimed that /volumeUSB1 was carrying some intermediate data.

I was skeptical, but …

$ df -h /volumeUSB1
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.3G  2.2G   61M  98% /

Whaaaaaat? That is mounted on the system volume??

I looked into that folder. The folder and its contents appeared only when I ssh-ed into the server, not in the GUI file browser. The data appeared to be our family photos, which clearly live on a different share (/photos) mounted on the data volume.

Long story short, after doing some experiments to verify that this was indeed some kind of duplicate junk data and not somehow my real data, I blew away the folders there.

$ df -h /volumeUSB1
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.3G  1.4G  849M  62% /

Additional software repositories