Logging with journald & syslog
Why This Matters
It is 3 AM. Your monitoring system pages you: "Website down." You SSH into the server, and the application is not running. You need answers, fast. When did it stop? What error caused it? Did something else fail first?
Every answer lives in the logs.
Logging is how Linux systems tell you what is happening, what went wrong, and what happened leading up to a failure. Without good logging, you are debugging blind. With good logging, you have a time machine that lets you replay exactly what happened on your system.
Modern Linux has two logging systems that work together: journald (systemd's structured journal) and rsyslog (the traditional syslog daemon). This chapter teaches you to query, filter, manage, and rotate logs so you always have the information you need when something breaks.
Try This Right Now
# See the last 20 log entries
journalctl -n 20 --no-pager
# Follow the log in real time (like tail -f but for everything)
journalctl -f
# Press Ctrl+C to stop
# See logs from the current boot only
journalctl -b --no-pager | head -30
# Find logs from a specific service
journalctl -u sshd.service -n 10 --no-pager
# See all ERROR-level messages from the last hour
journalctl -p err --since "1 hour ago" --no-pager
The Two Logging Systems
Modern Linux distributions run two logging systems side by side:
+------------------------------------------------+
|        Applications / Services / Kernel        |
+------------------------------------------------+
         |                            |
         v                            v
+------------------+      +----------------------+
|     journald     |      |       rsyslog        |
| (systemd journal)|      | (traditional syslog) |
| Binary, indexed  |      | Plain text files     |
| /run/log/journal |      | /var/log/syslog      |
|       or         |      | /var/log/messages    |
| /var/log/journal |      | /var/log/auth.log    |
+------------------+      +----------------------+
journald collects logs from:
- systemd services (stdout/stderr)
- The kernel (kmsg)
- Syslog messages
- Audit framework
rsyslog receives messages from:
- journald (forwarded)
- Direct syslog connections
- Remote log sources
On most modern systems, journald is the primary collector, and rsyslog writes the traditional text files that many tools expect.
journalctl: Your Log Investigation Tool
journalctl is the command-line tool for querying the systemd journal. It is
incredibly powerful once you know its filtering options.
Basic Usage
# View all logs (oldest first, paged)
journalctl
# View all logs (newest first)
journalctl -r
# View last N entries
journalctl -n 50
# Follow new entries in real time
journalctl -f
# No pager (dump to stdout, useful for piping)
journalctl --no-pager
Filtering by Unit
This is the most common filter -- show logs from a specific service:
# Logs from nginx
journalctl -u nginx.service --no-pager
# Logs from multiple units
journalctl -u nginx.service -u php-fpm.service --no-pager
# Follow a specific service's logs
journalctl -u myapp.service -f
Filtering by Time
# Logs since a specific time
journalctl --since "2025-03-10 14:00:00" --no-pager
# Logs in a time range
journalctl --since "2025-03-10 14:00" --until "2025-03-10 15:00" --no-pager
# Relative time expressions
journalctl --since "1 hour ago" --no-pager
journalctl --since "30 min ago" --no-pager
journalctl --since yesterday --no-pager
journalctl --since today --no-pager
Filtering by Boot
# Current boot only
journalctl -b
# Previous boot
journalctl -b -1
# Two boots ago
journalctl -b -2
# List all recorded boots
journalctl --list-boots
Sample output from --list-boots:
-3 abc123... Sat 2025-03-08 10:15:22 — Sat 2025-03-08 22:01:15
-2 def456... Sun 2025-03-09 08:30:11 — Sun 2025-03-09 23:45:00
-1 ghi789... Mon 2025-03-10 07:00:05 — Mon 2025-03-10 23:59:59
0 jkl012... Tue 2025-03-11 06:55:30 — Tue 2025-03-11 14:22:10
Think About It: Why would you want to look at logs from a previous boot? Think about what happens when a system crashes and reboots -- the clues to the crash are in the previous boot's logs, not the current one.
Filtering by Priority
Syslog priorities, from most to least severe:
| Priority | Keyword | Meaning |
|---|---|---|
| 0 | emerg | System is unusable |
| 1 | alert | Immediate action needed |
| 2 | crit | Critical conditions |
| 3 | err | Error conditions |
| 4 | warning | Warning conditions |
| 5 | notice | Normal but significant |
| 6 | info | Informational |
| 7 | debug | Debug-level messages |
# Show only errors and above (emerg, alert, crit, err)
journalctl -p err --no-pager
# Show warnings and above
journalctl -p warning --no-pager
# Show a specific priority range (err through warning)
journalctl -p err..warning --no-pager
# Combine with other filters
journalctl -p err -u nginx.service --since today --no-pager
Filtering by Other Fields
The journal stores structured data. You can filter on many fields:
# By process ID
journalctl _PID=1234 --no-pager
# By user ID
journalctl _UID=1000 --no-pager
# By executable path
journalctl _EXE=/usr/sbin/sshd --no-pager
# By hostname (useful in centralized logging)
journalctl _HOSTNAME=webserver01 --no-pager
# By kernel messages only
journalctl -k --no-pager
# or equivalently:
journalctl _TRANSPORT=kernel --no-pager
Output Formats
# Default format (human-readable)
journalctl -n 5
# Short with precise timestamps
journalctl -n 5 -o short-precise
# JSON format, one entry per line (great for piping to jq)
journalctl -n 5 -o json --no-pager
# Pretty-printed, multi-line JSON (easier to read)
journalctl -n 5 -o json-pretty --no-pager
# Verbose (show all fields)
journalctl -n 1 -o verbose --no-pager
# Export format (for backup/transfer)
journalctl -o export --no-pager > journal-export.bin
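Because -o json prints one flat JSON object per line, individual fields can be pulled out with ordinary text tools even when jq is not installed. A minimal sketch against a canned entry (the field values below are made up; on a real system you would pipe journalctl itself):

```shell
# A canned single-line entry shaped like "journalctl -o json" output
entry='{"PRIORITY":"6","SYSLOG_IDENTIFIER":"sshd","MESSAGE":"Accepted publickey for user"}'

# Extract the MESSAGE field using GNU grep's \K (keep left part out of the match)
msg=$(printf '%s\n' "$entry" | grep -oP '"MESSAGE":"\K[^"]*')
echo "$msg"
```

With jq installed, the equivalent on live data is journalctl -n 5 -o json --no-pager | jq -r '.MESSAGE'.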
The verbose output is particularly useful for understanding what metadata the journal stores:
journalctl -n 1 -o verbose --no-pager
Tue 2025-03-11 14:22:01.123456 UTC [s=abc123;i=42;b=def456...]
_TRANSPORT=syslog
PRIORITY=6
SYSLOG_IDENTIFIER=sshd
_PID=1234
_UID=0
_GID=0
_EXE=/usr/sbin/sshd
_COMM=sshd
_CMDLINE=sshd: user [priv]
MESSAGE=Accepted publickey for user from 10.0.0.1 port 54321
...
Hands-On: Log Investigation Workflow
Let us practice a realistic log investigation. We will look at SSH authentication events.
Step 1: Find SSH Events
journalctl -u sshd.service --since today --no-pager
Distro Note: Use -u ssh.service on Ubuntu/Debian.
Step 2: Filter for Authentication Failures
journalctl -u sshd.service --since today --no-pager | grep -i "failed\|invalid\|error"
Or use the journal's native grep:
journalctl -u sshd.service --since today --no-pager --grep="Failed password"
Step 3: Count Events
# How many failed login attempts today?
journalctl -u sshd.service --since today --no-pager --grep="Failed password" | wc -l
Step 4: Extract Attacker IPs
journalctl -u sshd.service --since today --no-pager --grep="Failed password" \
| grep -oP 'from \K[0-9.]+' | sort | uniq -c | sort -rn | head -10
This gives you the top 10 IP addresses attempting failed SSH logins.
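The extraction pipeline is plain text processing, so you can sanity-check it against canned sample lines before running it on a live journal (the log lines below are made up for illustration):

```shell
# Made-up sshd-style failure lines standing in for journalctl output
sample='Mar 11 14:20:01 host sshd[101]: Failed password for root from 10.0.0.5 port 4444 ssh2
Mar 11 14:20:03 host sshd[102]: Failed password for invalid user admin from 10.0.0.5 port 4445 ssh2
Mar 11 14:20:05 host sshd[103]: Failed password for root from 192.0.2.9 port 4446 ssh2'

# Same pipeline as above: pull the IP after "from", count, and rank
top=$(printf '%s\n' "$sample" \
  | grep -oP 'from \K[0-9.]+' | sort | uniq -c | sort -rn)
echo "$top"
```

Here 10.0.0.5 should rank first with two attempts, followed by 192.0.2.9 with one.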
Step 5: See the Full Context Around an Event
# Find an interesting timestamp, then look at everything around that time
journalctl --since "2025-03-11 14:20:00" --until "2025-03-11 14:25:00" --no-pager
Persistent Journal Storage
By default, many distributions store the journal only in /run/log/journal/, which
means logs are lost on reboot. For production systems, you want persistent storage.
Check Your Current Setup
# How much disk space is the journal using?
journalctl --disk-usage
# Is it persistent?
ls -la /var/log/journal/ 2>/dev/null && echo "Persistent" || echo "Volatile"
Enable Persistent Storage
# Create the persistent journal directory
sudo mkdir -p /var/log/journal
# Set correct ownership
sudo systemd-tmpfiles --create --prefix /var/log/journal
# Restart journald to start using persistent storage
sudo systemctl restart systemd-journald
# Verify
journalctl --disk-usage
Or configure it in the journal configuration:
sudo mkdir -p /etc/systemd/journald.conf.d/
sudo tee /etc/systemd/journald.conf.d/persistent.conf << 'CONF'
[Journal]
Storage=persistent
CONF
sudo systemctl restart systemd-journald
The Storage= options are:
| Value | Behavior |
|---|---|
| auto | Persistent if /var/log/journal/ exists, otherwise volatile (default) |
| persistent | Always persistent, creates the directory if needed |
| volatile | Only store in /run/log/journal/ (lost on reboot) |
| none | Do not store logs at all (not recommended) |
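Under Storage=auto, persistence comes down to a directory check. A toy sketch of that decision (journal_mode is a made-up helper name, parameterized so you can try it on any path):

```shell
# Storage=auto behavior: persistent if the given directory exists,
# volatile otherwise
journal_mode() {
  if [ -d "$1" ]; then echo persistent; else echo volatile; fi
}

journal_mode /var/log/journal   # what journald would decide on this host
```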
Journal Size Management
The journal can grow large on busy systems. Configure size limits:
sudo tee /etc/systemd/journald.conf.d/size.conf << 'CONF'
[Journal]
# Maximum disk space the journal can use
SystemMaxUse=500M
# Maximum size of individual journal files
SystemMaxFileSize=50M
# Keep at least this much free disk space
SystemKeepFree=1G
# Maximum time to keep entries
MaxRetentionSec=30day
CONF
sudo systemctl restart systemd-journald
Manual Cleanup
# See current disk usage
journalctl --disk-usage
# Example output: Archived and active journals take up 1.2G in the file system.
# Remove entries older than 2 weeks
sudo journalctl --vacuum-time=2weeks
# Reduce journal to a specific size
sudo journalctl --vacuum-size=500M
# Remove entries beyond a number of journal files
sudo journalctl --vacuum-files=5
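For automation you might wrap the cleanup in a threshold check. A sketch assuming GNU numfmt for size-unit conversion (over_budget is a made-up helper; on a real system you would feed it the figure reported by journalctl --disk-usage):

```shell
# Succeed when the journal's current size exceeds the budget.
# Sizes use IEC suffixes (K, M, G), converted to bytes by numfmt.
over_budget() {  # usage: over_budget CURRENT LIMIT
  [ "$(numfmt --from=iec "$1")" -gt "$(numfmt --from=iec "$2")" ]
}

if over_budget 2G 500M; then
  echo "journal over budget, vacuum needed"
  # sudo journalctl --vacuum-size=500M
fi
```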
Think About It: What is the trade-off between keeping more log history and managing disk space? On a production server, how far back would you want to keep logs, and why?
rsyslog and /var/log
While journald is the modern standard, rsyslog and the traditional /var/log/ files
are still important. Many tools, scripts, and monitoring systems expect plain text log
files.
The /var/log Directory
ls /var/log/
Common log files:
| File | Contents |
|---|---|
| /var/log/syslog | General system messages (Debian/Ubuntu) |
| /var/log/messages | General system messages (RHEL/Fedora) |
| /var/log/auth.log | Authentication events (Debian/Ubuntu) |
| /var/log/secure | Authentication events (RHEL/Fedora) |
| /var/log/kern.log | Kernel messages |
| /var/log/dmesg | Boot-time kernel messages |
| /var/log/boot.log | Boot process messages |
| /var/log/cron | Cron job execution |
| /var/log/maillog | Mail server logs |
| /var/log/nginx/ | nginx access and error logs |
| /var/log/apt/ | Package management logs (Debian/Ubuntu) |
| /var/log/dnf.log | Package management log (Fedora) |
Distro Note: Debian/Ubuntu use /var/log/syslog and /var/log/auth.log. RHEL and Fedora use /var/log/messages and /var/log/secure. The content is the same; only the filenames differ.
rsyslog Configuration
rsyslog's main configuration is in /etc/rsyslog.conf with additional files in
/etc/rsyslog.d/.
# View main config
cat /etc/rsyslog.conf
The configuration uses rules in the format:
facility.priority destination
Example rules:
# All auth messages go to auth.log
auth,authpriv.* /var/log/auth.log
# Everything except auth goes to syslog
*.*;auth,authpriv.none /var/log/syslog
# Kernel messages
kern.* /var/log/kern.log
# Emergency messages to all logged-in users
*.emerg :omusrmsg:*
Syslog Facilities
| Facility | Purpose |
|---|---|
| auth | Authentication |
| authpriv | Private authentication |
| cron | Cron daemon |
| daemon | System daemons |
| kern | Kernel |
| mail | Mail system |
| user | User processes |
| local0-local7 | Custom use |
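On the wire, a syslog message carries facility and severity combined into a single PRI number: PRI = facility × 8 + severity. A quick sketch using the numeric facility codes from RFC 5424 (kern=0, user=1, mail=2, daemon=3, auth=4, cron=9, local0=16):

```shell
# Compute the syslog PRI value: facility * 8 + severity
pri() {  # usage: pri FACILITY_NUM SEVERITY_NUM
  echo $(( $1 * 8 + $2 ))
}

pri 4 6    # auth.info     -> 38
pri 16 5   # local0.notice -> 133
```

This is the number you see in angle brackets (e.g. <38>) at the start of a raw syslog packet.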
Testing Syslog
# Send a test message to syslog
logger "This is a test message from $(whoami)"
# Send with a specific facility and priority
logger -p local0.notice "Test from local0"
# Check it arrived
journalctl --since "1 min ago" --no-pager
tail -5 /var/log/syslog 2>/dev/null || tail -5 /var/log/messages 2>/dev/null
Log Rotation with logrotate
Text log files grow forever unless rotated. logrotate handles this automatically: it compresses old logs, removes ancient ones, and signals services to reopen their log files.
How logrotate Works
+------------------+   logrotate   +------------------+
|    access.log    | ------------> | access.log       |  (current, fresh)
| (500 MB, 7 days) |               | access.log.1     |  (yesterday)
+------------------+               | access.log.2.gz  |  (2 days ago, compressed)
                                   | access.log.3.gz  |  (3 days ago, compressed)
                                   +------------------+
Configuration
Global settings are in /etc/logrotate.conf. Per-application configs are in
/etc/logrotate.d/.
# See what configs exist
ls /etc/logrotate.d/
Example configuration for a custom application:
sudo tee /etc/logrotate.d/myapp << 'CONF'
/var/log/myapp/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 myapp myapp
    sharedscripts
    postrotate
        systemctl reload myapp.service 2>/dev/null || true
    endscript
}
CONF
Let us break down each directive:
| Directive | Meaning |
|---|---|
| daily | Rotate once per day (also: weekly, monthly) |
| missingok | Do not error if the log file is missing |
| rotate 14 | Keep 14 rotated files before deleting |
| compress | Compress rotated files with gzip |
| delaycompress | Postpone compressing the most recent rotated file until the next cycle |
| notifempty | Do not rotate if the file is empty |
| create 0640 myapp myapp | Create a new log file with these permissions and ownership |
| sharedscripts | Run postrotate only once, not once per file |
| postrotate/endscript | Commands to run after rotation |
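What rotate N actually does is a rename cascade. A toy simulation of one cycle with rotate 3 in a scratch directory (illustrative only; logrotate handles this, plus compression and permissions, for real):

```shell
# Simulate one rotation cycle with "rotate 3"
demo=$(mktemp -d)
rotate=3
touch "$demo"/app.log "$demo"/app.log.1 "$demo"/app.log.2 "$demo"/app.log.3

rm -f "$demo/app.log.$rotate"            # oldest file falls off the end
for i in $(seq $((rotate - 1)) -1 1); do # shift .2 -> .3, .1 -> .2
  mv "$demo/app.log.$i" "$demo/app.log.$((i + 1))"
done
mv "$demo/app.log" "$demo/app.log.1"     # current log becomes .1
touch "$demo/app.log"                    # fresh, empty current log

files=$(ls "$demo")
echo "$files"
rm -rf "$demo"
```

After the cycle you are left with app.log through app.log.3 and nothing older, exactly the retention the directive promises.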
Testing logrotate
# Dry run (see what would happen without doing it)
sudo logrotate --debug /etc/logrotate.d/myapp
# Force a rotation right now
sudo logrotate --force /etc/logrotate.d/myapp
# Run the full logrotate (as cron normally does)
sudo logrotate /etc/logrotate.conf
Viewing the nginx Rotation Config
cat /etc/logrotate.d/nginx
Typical output:
/var/log/nginx/*.log {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then
            run-parts /etc/logrotate.d/httpd-prerotate
        fi
    endscript
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}
Hands-On: Setting Up Complete Logging
Let us set up proper logging for a custom application.
Step 1: Create a Test Application That Logs
sudo mkdir -p /opt/logdemo
sudo tee /opt/logdemo/app.sh << 'SCRIPT'
#!/bin/bash
while true; do
    # Log to stdout (captured by journald)
    echo "[INFO] Processing request at $(date)"
    # Simulate occasional errors; the <3> prefix makes journald record
    # the line at priority err (SyslogLevelPrefix=true is the default)
    if (( RANDOM % 5 == 0 )); then
        echo "<3>[ERROR] Something went wrong at $(date)" >&2
    fi
    sleep 5
done
SCRIPT
sudo chmod +x /opt/logdemo/app.sh
Step 2: Create a Service for It
sudo tee /etc/systemd/system/logdemo.service << 'UNIT'
[Unit]
Description=Logging Demo Application
[Service]
Type=simple
ExecStart=/opt/logdemo/app.sh
StandardOutput=journal
StandardError=journal
SyslogIdentifier=logdemo
Restart=on-failure
[Install]
WantedBy=multi-user.target
UNIT
sudo systemctl daemon-reload
sudo systemctl start logdemo.service
Step 3: Query the Logs
# All logs from our demo
journalctl -u logdemo.service --no-pager -n 20
# Only errors
journalctl -u logdemo.service -p err --no-pager
# Follow in real time
journalctl -u logdemo.service -f
# JSON output for parsing
journalctl -u logdemo.service -o json-pretty -n 5 --no-pager
Step 4: Clean Up
sudo systemctl stop logdemo.service
sudo rm /etc/systemd/system/logdemo.service
sudo rm -rf /opt/logdemo
sudo systemctl daemon-reload
Debug This: Where Did My Logs Go?
You check journalctl -u myapp.service and see nothing. The service is running. Where
are the logs?
Diagnosis checklist:
- Is the service actually producing output?

systemctl cat myapp.service | grep -E "Standard(Output|Error)"

If StandardOutput=null or StandardError=null, output is discarded.

- Is the application logging to a file instead of stdout? Many applications write to their own log files. Check the app's configuration.

ls -la /var/log/myapp/ 2>/dev/null

- Is the journal full and dropping messages?

journalctl --disk-usage
journalctl -p warning --no-pager | grep -i "journal"

- Is the service running under the wrong identifier?

# Maybe it is logging under a different name
journalctl --since "5 min ago" --no-pager | grep -i myapp

- Are you looking at the right boot?

# Make sure you are looking at the current boot
journalctl -u myapp.service -b 0 --no-pager
Centralized Logging Concepts
In production, you rarely look at logs on individual servers. Instead, you send logs to a centralized system.
Why Centralize?
- Persistence: If a server dies, its local logs may be lost
- Correlation: See events from multiple servers in one place
- Search: Query across all servers at once
- Alerting: Trigger alerts on specific log patterns
- Compliance: Some regulations require centralized, tamper-proof logs
Common Approaches
+----------+     +----------+     +----------+
| Server A |     | Server B |     | Server C |
| rsyslog  |     | rsyslog  |     | rsyslog  |
| journald |     | journald |     | journald |
+----+-----+     +----+-----+     +----+-----+
     |                |                |
     +----------------+----------------+
                      |
          +-----------+-----------+
          |                       |
          v                       v
+-----------------+      +------------------+
| Central Syslog  |      |  Elasticsearch   |
|     Server      |      | (ELK/OpenSearch) |
+-----------------+      +------------------+
Forwarding with rsyslog
To send logs to a remote syslog server:
sudo tee /etc/rsyslog.d/50-remote.conf << 'CONF'
# Forward all logs to central server via TCP
*.* @@logserver.example.com:514
# Or via UDP (single @)
# *.* @logserver.example.com:514
CONF
sudo systemctl restart rsyslog
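The @ prefixes select the transport: a single @ means UDP, a double @@ means TCP. A tiny classifier sketch to make the distinction concrete (transport is a made-up helper, not an rsyslog command):

```shell
# Classify an rsyslog action target by its prefix:
#   @@host:port -> TCP, @host:port -> UDP, anything else -> a file path
transport() {
  case "$1" in
    @@*) echo tcp ;;
    @*)  echo udp ;;
    *)   echo file ;;
  esac
}

transport '@@logserver.example.com:514'   # -> tcp
transport '@logserver.example.com:514'    # -> udp
```

Before restarting, you can also validate the new rule with rsyslog's config check: rsyslogd -N1.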
Forwarding with journald
journald can forward to a remote journal:
sudo tee /etc/systemd/journal-upload.conf << 'CONF'
[Upload]
URL=http://logserver.example.com:19532
CONF
sudo systemctl enable --now systemd-journal-upload.service
Open Source Centralized Logging Stacks
| Stack | Components |
|---|---|
| ELK | Elasticsearch + Logstash + Kibana |
| OpenSearch | OpenSearch + Data Prepper + OpenSearch Dashboards |
| Loki | Grafana Loki + Promtail + Grafana |
| Graylog | Graylog + MongoDB + OpenSearch |
Grafana Loki is particularly popular because it is lightweight and integrates naturally with Grafana dashboards. It is designed to be "like Prometheus, but for logs."
What Just Happened?
+------------------------------------------------------------------+
|                         CHAPTER 17 RECAP                         |
+------------------------------------------------------------------+
|                                                                  |
|  - journalctl is your primary log investigation tool             |
|  - Filter by unit (-u), time (--since/--until), priority (-p),   |
|    and boot (-b)                                                 |
|  - Enable persistent journal storage for production systems      |
|  - Manage journal size with SystemMaxUse= and vacuum commands    |
|  - rsyslog writes traditional /var/log text files                |
|  - /var/log/syslog (Debian) or /var/log/messages (RHEL) for      |
|    general messages                                              |
|  - logrotate compresses and cleans old log files                 |
|  - Centralized logging (ELK, Loki, Graylog) for production       |
|  - logger command sends test messages to syslog                  |
|                                                                  |
+------------------------------------------------------------------+
Try This
Exercise 1: Log Investigation
Use journalctl to answer these questions about your system:
- How many error-level messages occurred today?
- Which service produced the most log entries in the last hour?
- When did your system last boot?
Exercise 2: Persistent Journal
Check whether your journal is persistent. If not, enable persistent storage and verify that logs survive a reboot (if you can reboot your system).
Exercise 3: Custom logrotate
Create a logrotate configuration for a fictional application that writes to
/var/log/myapp/app.log. Configure it to rotate weekly, keep 8 weeks of history,
compress old files, and test with logrotate --debug.
Exercise 4: Priority Filtering
Use journalctl -p to list all critical and error messages from the last 7 days. Are
there any patterns? Any services that appear repeatedly?
Bonus Challenge
Set up rsyslog to write all authentication-related log entries to a separate file at
/var/log/auth-audit.log. Configure logrotate for this file. Then generate some
authentication events (SSH logins, sudo commands) and verify they appear in both
journalctl and your custom log file.