systemd: The Init System

Why This Matters

Picture this: you deploy a web application at 2 AM, and the server reboots unexpectedly. When it comes back up, your database starts before the network is ready, your app starts before the database is listening, and your reverse proxy starts before your app is responding. Nothing works. Users see errors. Your phone rings.

This is the exact problem an init system solves. It is the very first process that runs on your Linux system (PID 1), and it is responsible for starting everything else in the right order, keeping services alive, and shutting things down cleanly. On virtually every modern Linux distribution, that init system is systemd.

Whether you are a developer deploying applications, a sysadmin managing servers, or someone learning Linux for the first time, understanding systemd is non-negotiable. It controls how your system boots, how services run, how logs are collected, and how your system shuts down.


Try This Right Now

Open a terminal and run these commands. No setup required:

# What is PID 1 on your system?
ps -p 1 -o comm=

# How long has your system been running?
systemctl status --no-pager | head -5

# List all running services
systemctl list-units --type=service --state=running --no-pager

# What target (runlevel) is your system in?
systemctl get-default

If ps -p 1 printed systemd, you are running systemd. That covers Ubuntu, Fedora, Debian, RHEL, Arch, openSUSE, and nearly every other mainstream distribution.


What Is an Init System?

When the Linux kernel finishes its own initialization, it does one final thing: it launches a single userspace process. This process gets PID 1, and it becomes the ancestor of every other process on the system.

This PID 1 process is the init system, and it has several critical responsibilities:

  1. Start system services in the correct order (networking, logging, databases, etc.)
  2. Manage dependencies between services (the database needs the network first)
  3. Supervise running services and restart them if they crash
  4. Handle system state transitions (booting, shutting down, rebooting)
  5. Reap orphaned processes (adopt zombie children whose parents died)

+----------------------------------------------------------+
|                    Linux Kernel                           |
|  (hardware init, drivers, mount root filesystem)         |
+---------------------------+------------------------------+
                            |
                            v
                    +-------+-------+
                    |   PID 1       |
                    |   (systemd)   |
                    +-------+-------+
                            |
              +-------------+-------------+
              |             |             |
              v             v             v
        +---------+   +---------+   +---------+
        | sshd    |   | nginx   |   | cron    |
        | (svc)   |   | (svc)   |   | (svc)   |
        +---------+   +---------+   +---------+
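Responsibility 5, reaping, is easy to watch from a terminal. In this sketch a subshell backgrounds a sleep and then exits, orphaning it; the orphan is adopted by a reaper, classically PID 1, though on a desktop session it may be a per-user subreaper such as systemd --user:

```shell
# A subshell backgrounds a sleep and exits immediately, orphaning it.
( sleep 5 & )
sleep 1   # give the kernel a moment to re-parent the orphan

# Show the orphan's new parent PID (often 1, i.e. systemd)
pid=$(pgrep -n -x sleep)
ps -o pid=,ppid=,comm= -p "$pid"
```

Without a reaper to call wait() on these adopted children, every orphan would linger as a zombie process forever.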

A Brief History: SysVinit to systemd

SysVinit (The Old Way)

For decades, Linux used SysVinit, inherited from Unix System V. It worked like this:

  • Shell scripts in /etc/init.d/ controlled each service
  • Scripts were symlinked into numbered "runlevel" directories (/etc/rc3.d/, etc.)
  • Services started sequentially, one after another
  • The naming convention (S20sshd, S30apache) controlled start order

# Old SysVinit style (you may still see this on older systems)
/etc/init.d/apache2 start
/etc/init.d/apache2 stop
/etc/init.d/apache2 restart

The problems with SysVinit were real:

  • Slow boot times because services started one at a time
  • No dependency tracking — just numbered ordering and hope
  • No process supervision — if a service crashed, nobody restarted it
  • Shell scripts everywhere — fragile, inconsistent, hard to debug

systemd (The Modern Way)

Lennart Poettering and Kay Sievers created systemd in 2010. It was controversial (many Unix purists objected to its scope), but it solved genuine problems:

  • Parallel service startup dramatically reduced boot times
  • Declarative unit files replaced fragile shell scripts
  • Dependency management ensured correct startup order
  • Process supervision with automatic restart on failure
  • Unified logging via the journal (journald)
  • On-demand activation via socket and D-Bus activation

By 2015, every major distribution had adopted systemd.
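Those dependency and supervision bullets map directly onto unit file directives. As a hedged preview (myapp.service and its paths are invented for illustration; Chapter 16 covers the syntax properly), the 2 AM scenario from the introduction is prevented like this:

```shell
# Hypothetical unit file showing declarative dependencies and supervision
cat <<'EOF'
[Unit]
Description=My web application
# Ordering: wait for the network and the database before starting
After=network-online.target postgresql.service
# Dependency: if postgresql.service is not running, pull it in too
Requires=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp
# Supervision: restart automatically if the process crashes
Restart=on-failure
EOF
```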

Think About It: Why would parallel service startup require explicit dependency management? What could go wrong if you just started everything at the same time without tracking which services depend on which?


systemctl: Your Primary Interface

systemctl is the command you will use most often to interact with systemd. Think of it as the control panel for everything running on your system.

Starting and Stopping Services

# Start a service (takes effect immediately)
sudo systemctl start nginx

# Stop a service
sudo systemctl stop nginx

# Restart a service (stop + start)
sudo systemctl restart nginx

# Reload a service configuration without full restart
# (not all services support this)
sudo systemctl reload nginx

# Reload if supported, otherwise restart
sudo systemctl reload-or-restart nginx

Enabling and Disabling Services

Starting a service only affects the currently running system. If you reboot, the service will not start automatically unless you enable it:

# Enable a service to start at boot
sudo systemctl enable nginx

# Disable a service from starting at boot
sudo systemctl disable nginx

# Enable AND start in one command
sudo systemctl enable --now nginx

# Disable AND stop in one command
sudo systemctl disable --now nginx

When you enable a service, systemd creates a symlink in the appropriate target directory. When you disable it, that symlink is removed.

# See what enabling actually does
sudo systemctl enable --now nginx
# Output: Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service
#         -> /usr/lib/systemd/system/nginx.service

Checking Service Status

# Detailed status of a service
systemctl status nginx

Here is what typical output looks like:

● nginx.service - A high performance web server
     Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: disabled)
     Active: active (running) since Mon 2025-03-10 14:22:01 UTC; 3h ago
       Docs: man:nginx(8)
    Process: 1234 ExecStartPre=/usr/bin/nginx -t -q (code=exited, status=0/SUCCESS)
   Main PID: 1235 (nginx)
      Tasks: 3 (limit: 4915)
     Memory: 8.2M
        CPU: 120ms
     CGroup: /system.slice/nginx.service
             ├─1235 "nginx: master process /usr/bin/nginx"
             ├─1236 "nginx: worker process"
             └─1237 "nginx: worker process"

Mar 10 14:22:01 server01 systemd[1]: Starting nginx.service...
Mar 10 14:22:01 server01 systemd[1]: Started nginx.service.

Let us break down each line:

Field      Meaning
--------   ---------------------------------------------------
Loaded     Where the unit file lives, whether it is enabled
Active     Current state and how long it has been running
Main PID   The primary process ID
Tasks      Number of tasks (threads/processes) in the cgroup
Memory     Memory consumed by the service and its children
CGroup     The cgroup tree showing all child processes

Quick Status Checks

Sometimes you just need a yes/no answer:

# Is it running?
systemctl is-active nginx
# Output: active

# Is it enabled at boot?
systemctl is-enabled nginx
# Output: enabled

# Has it failed?
systemctl is-failed nginx
# Output: active   (meaning "not failed")

These commands return exit codes you can use in scripts:

if systemctl is-active --quiet nginx; then
    echo "nginx is running"
else
    echo "nginx is NOT running"
fi

Hands-On: Exploring Your System's Services

Let us explore what is running on your system right now.

Step 1: List All Running Services

systemctl list-units --type=service --state=running --no-pager

You will see output like:

UNIT                      LOAD   ACTIVE SUB     DESCRIPTION
cron.service              loaded active running Regular background program processing
dbus.service              loaded active running D-Bus System Message Bus
NetworkManager.service    loaded active running Network Manager
sshd.service              loaded active running OpenSSH Daemon
systemd-journald.service  loaded active running Journal Service
systemd-udevd.service     loaded active running Rule-based Manager for Device Events
...

Step 2: List Failed Services

systemctl list-units --type=service --state=failed --no-pager

On a healthy system, this should be empty. If you see failures, investigate with systemctl status <unit-name>.

Step 3: List All Installed Services (Running or Not)

systemctl list-unit-files --type=service --no-pager

This shows every service unit file installed on your system and whether it is enabled, disabled, static, or masked.
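A quick way to summarize those states is to strip the listing down with awk and count. The --no-legend flag removes the header and footer so the output is script-friendly:

```shell
# Tally service unit files by state (enabled, disabled, static, masked, ...)
systemctl list-unit-files --type=service --no-legend --no-pager \
    | awk '{print $2}' | sort | uniq -c | sort -rn
```

"static" means the unit has no [Install] section and cannot be enabled directly; it is started only as a dependency of something else.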

Step 4: View a Service's Unit File

systemctl cat sshd.service

This prints the actual unit file contents. We will study unit file anatomy in Chapter 16.

Distro Note: On Ubuntu/Debian, the SSH service is called ssh.service. On Fedora/RHEL/Arch, it is sshd.service. When in doubt, use tab completion: systemctl status ssh<TAB>.


Unit Types: Not Just Services

systemd does not just manage services. It manages many types of "units." Each unit type handles a different kind of system resource:

Unit Type   Extension    Purpose
---------   ----------   -----------------------------------------
Service     .service     Daemons and processes
Socket      .socket      IPC or network sockets (for activation)
Timer       .timer       Scheduled tasks (like cron jobs)
Mount       .mount       Filesystem mount points
Automount   .automount   On-demand filesystem mounting
Target      .target      Groups of units (like runlevels)
Device      .device      Kernel device events
Path        .path        Filesystem path monitoring
Swap        .swap        Swap space
Slice       .slice       Resource management groups (cgroups)
Scope       .scope       Externally created process groups

Listing Different Unit Types

# List all active timers
systemctl list-timers --no-pager

# List all mount units
systemctl list-units --type=mount --no-pager

# List all socket units
systemctl list-units --type=socket --no-pager

# List all targets
systemctl list-units --type=target --no-pager

Service Units

These are the most common. They manage long-running daemons:

systemctl list-units --type=service --no-pager | head -20

Socket Units

Socket units enable socket activation: systemd listens on a socket and only starts the actual service when a connection arrives. This saves resources.

# See which sockets systemd is listening on
systemctl list-sockets --no-pager
LISTEN                        UNIT                     ACTIVATES
/run/dbus/system_bus_socket   dbus.socket              dbus.service
/run/systemd/journal/socket   systemd-journald.socket  systemd-journald.service
[::]:22                       sshd.socket              sshd.service
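To make the mechanism concrete, here is a hedged sketch of an inetd-style socket/service pair. The echo-demo names are invented for illustration, and real unit file syntax is covered in Chapter 16; the files are written to a scratch directory so you can inspect them safely, though on a real system they would go in /etc/systemd/system/:

```shell
# Scratch directory standing in for /etc/systemd/system/
UNIT_DIR=$(mktemp -d)

cat > "$UNIT_DIR/echo-demo.socket" <<'EOF'
[Socket]
ListenStream=9000
# Accept=yes: spawn one service instance per incoming connection
Accept=yes

[Install]
WantedBy=sockets.target
EOF

cat > "$UNIT_DIR/echo-demo@.service" <<'EOF'
[Service]
# cat simply echoes each client's input back over the socket
ExecStart=/usr/bin/cat
StandardInput=socket
EOF

# To try it for real: copy both files to /etc/systemd/system/, then
#   sudo systemctl daemon-reload && sudo systemctl start echo-demo.socket
ls "$UNIT_DIR"
```

Until a client connects to port 9000, no service process exists at all; systemd holds the listening socket itself.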

Timer Units

Timer units are systemd's replacement for cron jobs. We will cover these in detail in Chapters 16 and 24.

# List all active timers and when they fire next
systemctl list-timers --all --no-pager

Mount Units

Every entry in /etc/fstab gets automatically converted to a mount unit:

# See mount units
systemctl list-units --type=mount --no-pager
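The conversion also explains where mount unit names come from: they are derived from the mount point path using systemd's escaping rules. The systemd-escape helper, shipped with systemd, shows the mapping:

```shell
# Turn a mount point path into the corresponding mount unit name
systemd-escape --path --suffix=mount /var/log
```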

Think About It: Why would systemd want to manage mount points as units instead of just reading /etc/fstab the old-fashioned way? Think about dependencies: what if a service needs a specific filesystem to be mounted before it can start?


Targets: The New Runlevels

In SysVinit, runlevels (0-6) defined what state the system was in. systemd replaces runlevels with targets -- units that group other units together.

Runlevel to Target Mapping

Runlevel   systemd Target      Purpose
--------   -----------------   -------------------------------------
0          poweroff.target     Halt the system
1          rescue.target       Single-user mode (recovery)
2          multi-user.target   Multi-user, no GUI (Debian-specific)
3          multi-user.target   Multi-user, no GUI
4          multi-user.target   Unused (custom)
5          graphical.target    Multi-user with GUI
6          reboot.target       Reboot

Checking and Changing the Default Target

# What target does your system boot into?
systemctl get-default
# Output: graphical.target   (desktop) or multi-user.target (server)

# Change default to text/server mode
sudo systemctl set-default multi-user.target

# Change default to graphical/desktop mode
sudo systemctl set-default graphical.target

Switching Targets at Runtime

# Switch to rescue mode (single-user, for recovery)
sudo systemctl isolate rescue.target

# Switch to multi-user (text mode)
sudo systemctl isolate multi-user.target

# Switch to graphical mode
sudo systemctl isolate graphical.target

WARNING: systemctl isolate rescue.target will kill most running services and drop you to a root shell. Do not run this on a remote server unless you have console access.

Understanding Target Dependencies

Targets are like dependency trees. multi-user.target depends on basic.target, which depends on sysinit.target, which depends on local-fs.target and others:

graphical.target
    └── multi-user.target
            └── basic.target
                    ├── sockets.target
                    ├── timers.target
                    ├── paths.target
                    ├── slices.target
                    └── sysinit.target
                            ├── local-fs.target
                            ├── swap.target
                            └── cryptsetup.target

You can visualize this:

# Show what a target "wants" (its dependencies)
systemctl list-dependencies multi-user.target --no-pager

# Show the full boot dependency tree
systemctl list-dependencies default.target --no-pager

Hands-On: Managing a Real Service

Let us work through a complete service management workflow using the SSH daemon.

Step 1: Check Current Status

systemctl status sshd.service

Distro Note: Use ssh.service on Debian/Ubuntu.

Step 2: Stop the Service

sudo systemctl stop sshd.service

WARNING: If you are connected via SSH, do NOT stop sshd. Your existing connection will survive, but you will not be able to open new ones. Use a different service for practice if you are remote.

Step 3: Verify It Stopped

systemctl is-active sshd.service
# Output: inactive

systemctl status sshd.service
# Active line now shows: inactive (dead)

Step 4: Start It Again

sudo systemctl start sshd.service

systemctl is-active sshd.service
# Output: active

Step 5: Check Boot Configuration

systemctl is-enabled sshd.service
# Output: enabled

Step 6: View Recent Logs

# Last 20 log entries for sshd
journalctl -u sshd.service -n 20 --no-pager

Masking Services: The Nuclear Option

Sometimes disabling a service is not enough. Another service or a system update might re-enable it. Masking a service makes it completely impossible to start:

# Mask a service (symlinks unit file to /dev/null)
sudo systemctl mask bluetooth.service

# Try to start it — it will refuse
sudo systemctl start bluetooth.service
# Failed to start bluetooth.service: Unit bluetooth.service is masked.

# Unmask it when you want to allow it again
sudo systemctl unmask bluetooth.service

Masking is useful when:

  • You want to ensure a service never runs (security hardening)
  • Two services conflict and you want to permanently disable one
  • You are troubleshooting and want to eliminate a service entirely

# See what a mask looks like
ls -la /etc/systemd/system/bluetooth.service
# /etc/systemd/system/bluetooth.service -> /dev/null

Debug This: Why Won't My Service Start?

You install nginx and try to start it, but it fails:

sudo systemctl start nginx.service
# Job for nginx.service failed because the control process exited with error code.

Here is your debugging workflow:

Step 1: Check Status

systemctl status nginx.service --no-pager -l

The -l flag prevents line truncation. Look for error messages in the log section at the bottom.

Step 2: Check the Journal

journalctl -u nginx.service -n 50 --no-pager

Look for messages at err or crit priority (you can filter directly with -p err).

Step 3: Check Configuration Syntax

# For nginx specifically
sudo nginx -t

Step 4: Check Port Conflicts

# Is something else using port 80?
sudo ss -tlnp | grep :80

Step 5: Check File Permissions

# Can the service user read its config?
ls -la /etc/nginx/nginx.conf

# Can it write to its log directory?
ls -la /var/log/nginx/

Step 6: Try Starting Manually

# Run the exact command from the unit file
systemctl cat nginx.service | grep ExecStart
# ExecStart=/usr/sbin/nginx -g 'daemon off;'

# Run the binary directly to see errors on your terminal
sudo /usr/sbin/nginx -t                  # config check
sudo /usr/sbin/nginx -g 'daemon off;'    # run in foreground; Ctrl+C to stop

Common causes of service start failures:

  • Configuration syntax errors in the service's config file
  • Port already in use by another service
  • Missing files or directories that the service expects
  • Permission denied on config files, log directories, or PID files
  • Missing dependencies (a library or another service)
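Those checks can be strung together into a small triage snippet. The service name is a placeholder; nginx is just this section's running example:

```shell
# If the service is not active, dump its status and recent error logs
svc=nginx.service

if ! systemctl is-active --quiet "$svc"; then
    echo "--- status ---"
    systemctl status "$svc" --no-pager -l || true
    echo "--- recent error-level journal entries ---"
    journalctl -u "$svc" -n 20 --no-pager -p err || true
fi
```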

Useful systemctl Commands Reference

# Reload systemd itself after editing unit files
sudo systemctl daemon-reload

# Show all properties of a unit
systemctl show nginx.service --no-pager

# Show a specific property
systemctl show nginx.service --property=MainPID
systemctl show nginx.service --property=ActiveState

# List all units that failed
systemctl --failed --no-pager

# Reset the "failed" state of a unit
sudo systemctl reset-failed nginx.service

# Show the boot time breakdown
systemd-analyze

# Show which services took the longest to start
systemd-analyze blame --no-pager

# Show the critical chain (boot bottlenecks)
systemd-analyze critical-chain --no-pager

# Verify a unit file for errors
systemd-analyze verify /etc/systemd/system/myapp.service

systemd-analyze: Understanding Boot Performance

systemd-analyze is a powerful tool for understanding what happens during boot:

# Overall boot time
systemd-analyze
# Startup finished in 2.5s (kernel) + 5.1s (userspace) = 7.6s

# Which services took the longest?
systemd-analyze blame --no-pager | head -10
# 3.2s   NetworkManager-wait-online.service
# 1.1s   snapd.service
# 0.8s   udisks2.service
# 0.5s   accounts-daemon.service
# ...

# Show the critical path through the boot
systemd-analyze critical-chain --no-pager

The critical chain shows the longest dependency path through boot. This is where to focus if you want to speed up boot times.


What Just Happened?

+------------------------------------------------------------------+
|                         CHAPTER 15 RECAP                         |
+------------------------------------------------------------------+
|                                                                  |
|  - The init system (PID 1) starts and supervises all services    |
|  - systemd replaced SysVinit with parallel startup,              |
|    dependency management, and process supervision                |
|  - systemctl is your main tool: start, stop, restart,            |
|    enable, disable, status                                       |
|  - Unit types: service, socket, timer, mount, target, etc.       |
|  - Targets replace runlevels: multi-user.target,                 |
|    graphical.target, rescue.target                               |
|  - enable = start at boot; start = start now                     |
|  - mask = prevent a service from starting entirely               |
|  - systemd-analyze helps diagnose slow boots                     |
|                                                                  |
+------------------------------------------------------------------+

Try This

Exercise 1: Service Inventory

List all enabled services on your system. How many are there? Pick three you do not recognize and look up what they do using systemctl cat and man.

systemctl list-unit-files --type=service --state=enabled --no-pager

Exercise 2: Boot Analysis

Run systemd-analyze blame and find the three slowest services. Research whether any can be safely disabled on your system.

Exercise 3: Target Exploration

Run systemctl list-dependencies graphical.target and trace the dependency tree. Draw it on paper. How many levels deep does it go?

Exercise 4: Service Lifecycle

Install a simple service (like apache2 or httpd), then practice the full lifecycle:

sudo apt install apache2        # or: sudo dnf install httpd
sudo systemctl start apache2    # or: httpd
sudo systemctl status apache2
sudo systemctl stop apache2
sudo systemctl enable apache2
sudo systemctl disable apache2
sudo systemctl mask apache2
sudo systemctl start apache2    # Watch it fail
sudo systemctl unmask apache2

Bonus Challenge

Use systemd-analyze critical-chain to find the boot bottleneck on your system. Can you reduce boot time by disabling unnecessary services? Document the before and after boot times.