Persistent Synology Configuration
DSM is a Linux-based operating system shipped with Synology servers. Its root filesystem is kind of immutable: you may change it, but you never know when your modifications will be lost. Sometimes it happens after a reboot, other times after a system upgrade. To persist configuration, Synology expects you to configure everything from the web UI, which is a very limiting approach for power users. This situation made me create a system which applies my configuration without relying on the web UI, apart from some initial setup.
TL;DR: It comes down to calling a series of shell scripts with run-parts during Synology boot. run-parts is called from the Synology Task Scheduler.
Preparations
Before we continue, we need three things:
- a persistent filesystem which Synology won’t modify by itself: one of the already configured volumes (for me this is /volume1)
- enabled SSH: Control Panel → Terminal & SNMP → Terminal → Enable SSH service
- access to an administrator account (not root), because only admins may SSH into a Synology box
Fake Rootfs
With these things at hand, we may SSH into our Synology box using the admin account.
(sidenote: To be frank, it’s possible to run shell commands directly through the Synology Task Scheduler, but I wholeheartedly hate this method.)
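Concretely, that’s just a plain SSH session from any machine on the LAN; the address below is only a placeholder for your NAS:
ssh admin@192.168.1.10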
Let’s ignore the annoying warning in the MOTD and switch to the root account with the sudo su command. As real root, we may create a kind of minimal rootfs on our volume, which will imitate the real one. Let’s put it in /volume1/sys:
# mkdir -p /volume1/sys/{bin,etc,opt,usr/share}
# chown -R root:administrators /volume1/sys
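A quick sanity check; given the commands above, the listing should show exactly these four entries:
# ls /volume1/sys
bin  etc  opt  usr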
It serves as a bucket for executable scripts (for backups, certificate renewal), configuration files, etc. For now we’ll populate it with the scripts which we’d like to run on boot. I think that the most appropriate place for them is inside usr/share, so let’s create a directory there and put some scripts inside: (sidenote: All vim commands can be replaced with any other way of populating this directory with scripts. All scripts presented below are only examples of what you can achieve.)
# mkdir /volume1/sys/usr/share/run-on-boot.d
# vim /volume1/sys/usr/share/run-on-boot.d/10-hosts
# vim /volume1/sys/usr/share/run-on-boot.d/10-udev-serial-rule
# vim /volume1/sys/usr/share/run-on-boot.d/50-start-docker
# vim /volume1/sys/usr/share/run-on-boot.d/99-ntfy
# chmod +x /volume1/sys/usr/share/run-on-boot.d/*
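The contents of these scripts are covered in the Scripts section below. Each of them is just an ordinary executable shell script, so a minimal placeholder (the echo line here is purely illustrative) could look like this:
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail
echo "Hello from run-on-boot.d"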
Run Scripts After Boot
DSM allows running arbitrary scripts after the server boots, but we must add them through the Task Scheduler in the web UI: Control Panel → Task Scheduler → Create → Triggered Task → User-defined script. Make sure that the user who runs the script is root and the event is “Boot-up”, then switch to the Task Settings tab, where we may type the actual script. To run the scripts in lexicographical order we’re going to use the ordinary run-parts program.
echo "Runing on-boot sequence"
/bin/run-parts --verbose /volume1/sys/usr/share/run-on-boot.d
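Before trusting the scheduled task, it’s easy to dry-run the sequence over SSH. Assuming DSM ships a Debian-style run-parts, the --test flag prints the scripts in execution order without running anything, and dropping it executes the whole sequence on the spot:
# /bin/run-parts --test /volume1/sys/usr/share/run-on-boot.d
# /bin/run-parts --verbose /volume1/sys/usr/share/run-on-boot.d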
On top of that, Synology allows us to configure e-mail notifications when a script fails. It’s a useful option and I recommend enabling it, because these e-mails contain the output of the failed script, which is a tremendous help for debugging.
Scripts
Here are examples of the scripts which I use on my Synology NAS. Prefixing them with a number between 00 and 99 is an old convention, which you may encounter in most Unix-based operating systems, to make sure that scripts are executed in a predictable order.
10-hosts
At one point I had a problem with my NAS calling home for no apparent reason. (sidenote: The NAS was calling QuickConnect, a cloud service for remote access, which I had never enabled. I believe it was a bug, not ill will on the part of Synology Inc.) On top of being a privacy concern, it put an unnecessary burden on my local DNS server (PiHole), because it generated ~8000 A and AAAA queries per hour.
I decided to block certain domains in the /etc/hosts file. /etc/hosts on Synology comes with a warning, though, that any manual changes are lost after a hostname change or a system upgrade. Thanks to my script, I make sure after each reboot that Synology obeys my will.
#!/bin/bash

# Append an entry to /etc/hosts unless the hostname is already present.
add_host() {
    ip="$1"
    hostname="$2"
    if ! grep -q "\b${hostname}\b" /etc/hosts; then
        echo "${ip} ${hostname}" >> /etc/hosts
    fi
}
add_host 127.0.0.1 checkip.synology.com
add_host 127.0.0.1 global.quickconnect.to
add_host 127.0.0.1 global.quickconnect.to.home
add_host ::1 checkipv6.synology.com
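A quick way to verify the script did its job is to look for the entries it appends (the grep pattern is just a convenience); it should print the four lines added above:
# grep -E 'quickconnect|checkip' /etc/hosts
127.0.0.1 checkip.synology.com
127.0.0.1 global.quickconnect.to
127.0.0.1 global.quickconnect.to.home
::1 checkipv6.synology.com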
10-udev-serial-rule
This script makes sure that the udev rule which configures persistent device names in /dev exists. It’s necessary because Synology likes to delete all custom rules after upgrades: a thing I learned when Home Assistant refused to start.
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

rule=/usr/lib/udev/rules.d/60-serial.rules

write_udev() {
    cat <<EOF
ACTION=="remove", GOTO="serial_end"
...
EOF
}

if [[ ! -e "${rule}" ]]; then
    echo "Recreating ${rule}"
    write_udev > "${rule}"
    chmod 644 "${rule}"
    udevadm control --reload-rules
    udevadm trigger
fi
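Assuming the rule is the stock 60-serial.rules, which creates the persistent /dev/serial symlink tree, a quick post-reboot check is:
# ls -l /dev/serial/by-id/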
50-start-docker
Dockerized services are an important part of my homelab and I run quite a few of them. I orchestrate them with docker-compose, which this script autostarts.
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail
echo "Starting up Docker services..."
cd /volume1/docker && /usr/local/bin/docker-compose up -d
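After boot, the same docker-compose binary can report the state of the stack (path as in the script above):
# cd /volume1/docker && /usr/local/bin/docker-compose ps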
99-ntfy
Finally, when the on-boot scripts finish, the system sends a push notification through a self-hosted instance of ntfy.sh, which I receive on my phone. Starting the DNS and DHCP server is part of this boot-up sequence, so I know that there’s no internet access for me until I receive the notification.
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
. "${DIR}/.env"

curl --silent \
    -H "Authorization: Bearer ${NTFY_TOKEN}" \
    -d "Synology: Boot finished" \
    https://ntfy.example.com/boot-alerts >/dev/null
This script uses a separate .env file, located in the same directory as the script. .env exports the NTFY_TOKEN environment variable, in case any other scripts would like to send notifications. Remember not to set the executable flag on the .env file, otherwise run-parts may try to execute it.
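For completeness, a minimal sketch of that .env file, with a placeholder token value:
# /volume1/sys/usr/share/run-on-boot.d/.env  (readable, but NOT executable)
export NTFY_TOKEN="replace-with-your-ntfy-access-token"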