Links, Inodes & the VFS
Why This Matters
You delete a 2GB log file, but df -h shows no change in disk usage. You check and the file is gone -- ls confirms it. Yet the space is not freed. What is going on?
The answer lies in a concept called inodes -- the hidden data structures that the filesystem uses to track every file. The file you deleted was still held open by a running process, and as long as that process keeps its file descriptor open, the inode (and the data blocks it points to) stays alive. Understanding inodes, links, and how Linux's Virtual Filesystem layer ties everything together gives you the ability to diagnose puzzles like this in seconds.
This chapter pulls back the curtain on how Linux tracks files internally, how hard links and soft links work (and why they behave differently), and how the VFS lets you interact with wildly different systems -- physical disks, RAM-based pseudo-filesystems, network shares -- through a single uniform interface.
Try This Right Now
# See the inode number of a file
ls -i /etc/hostname
# Get detailed inode information
stat /etc/hostname
# See inode usage on your filesystem
df -i
# Create and compare hard and soft links
echo "original content" > /tmp/original.txt
ln /tmp/original.txt /tmp/hardlink.txt
ln -s /tmp/original.txt /tmp/softlink.txt
ls -li /tmp/original.txt /tmp/hardlink.txt /tmp/softlink.txt
Look at the output of that last command carefully. The original file and the hard link share the same inode number. The soft link has a different inode. This single observation is the key to understanding everything in this chapter.
What Is an Inode?
An inode (index node) is a data structure on disk that stores all the metadata about a file -- everything except the file's name and its actual data. Every file and directory on an ext4, XFS, or Btrfs filesystem has exactly one inode.
What an Inode Stores
+------------------------------------------+
| INODE #12345                             |
+------------------------------------------+
| File type:   regular file                |
| Permissions: rw-r--r-- (644)             |
| Owner (UID): 1000                        |
| Group (GID): 1000                        |
| Size:        4096 bytes                  |
| Timestamps:                              |
|   - atime: last accessed                 |
|   - mtime: last modified                 |
|   - ctime: last status change            |
| Link count:  1                           |
| Pointers to data blocks:                 |
|   Block 0 -> disk sector 48392           |
|   Block 1 -> disk sector 48393           |
|   Block 2 -> disk sector 48400           |
+------------------------------------------+
Notice what is not in the inode: the filename. The filename lives in the directory that contains the file. A directory is essentially a table mapping names to inode numbers.
Directory: /home/alice/
+-------------------+--------+
| Filename          | Inode  |
+-------------------+--------+
| .                 | 23001  |
| ..                | 2      |
| notes.txt         | 23042  |
| report.pdf        | 23108  |
| scripts/          | 23200  |
+-------------------+--------+
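You can watch this name-to-inode table directly. The sketch below uses a hypothetical scratch directory under /tmp; ls -1ai prints one inode/name pair per line, including the . and .. entries.

```shell
# Dump a directory's name -> inode table
# (/tmp/dtab-demo is a scratch path for illustration)
mkdir -p /tmp/dtab-demo
cd /tmp/dtab-demo
touch a.txt b.txt
ls -1ai          # each line: "<inode>  <name>", including . and ..
```

Note that the inode shown for "." is the directory's own inode number, which is exactly what stat reports for the directory itself.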
The stat Command
stat shows you everything the inode contains:
stat /etc/hostname
  File: /etc/hostname
  Size: 7               Blocks: 8          IO Block: 4096   regular file
Device: 801h/2049d      Inode: 131073      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-01-15 10:00:00.000000000 +0000
Modify: 2025-01-10 08:30:00.000000000 +0000
Change: 2025-01-10 08:30:00.000000000 +0000
 Birth: 2025-01-10 08:30:00.000000000 +0000
Key fields:
- Inode: The inode number (131073 in this example)
- Links: The hard link count (how many directory entries point to this inode)
- Access/Modify/Change: The three timestamps
- atime: When the file was last read
- mtime: When the file's contents were last modified
- ctime: When the inode's metadata was last changed (permissions, owner, link count)
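You can watch mtime and ctime move independently. In this sketch (the file path is illustrative), chmod is a metadata-only change, so it bumps ctime while leaving mtime untouched:

```shell
# Watch mtime vs ctime on a scratch file
f=/tmp/ts-demo.txt
echo "hello" > "$f"
m1=$(stat -c %Y "$f")    # mtime, seconds since epoch
c1=$(stat -c %Z "$f")    # ctime, seconds since epoch
sleep 1
chmod 600 "$f"           # metadata-only change: no data written
m2=$(stat -c %Y "$f")
c2=$(stat -c %Z "$f")
echo "after chmod: mtime moved $((m2 - m1))s, ctime moved $((c2 - c1))s"
# mtime moved 0s; ctime moved at least 1s
```

Appending data (echo more >> "$f") would bump both mtime and ctime, since a size change is also a metadata change.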
# See inode numbers alongside filenames
ls -i /etc/ | head -10
# Check inode usage (you can run out of inodes!)
df -i
Think About It: Can you run out of inodes even if you have plenty of disk space? Yes! If you create millions of tiny files (as some mail servers or cache systems do), you might exhaust inodes before disk space. On ext4, the inode count is fixed at filesystem creation time (set it with mkfs.ext4 -N).
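To check how close a filesystem is to inode exhaustion, you can pull the counts out of df -i. A sketch (on filesystems that allocate inodes dynamically, such as Btrfs, these columns may read 0):

```shell
# Fields of `df -i`: filesystem, total inodes, used, free, use%, mount
df -i / | awk 'NR==2 {printf "inodes: total=%s used=%s free=%s (%s used)\n", $2, $3, $4, $5}'
```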
Hard Links
A hard link is an additional directory entry that points to the same inode as an existing file. The hard link and the original file are completely indistinguishable -- they are both equally "real" names for the same data.
# Create a file
echo "I am the original" > /tmp/original.txt
stat /tmp/original.txt | grep -E "Inode|Links"
# Inode: 12345 Links: 1
# Create a hard link
ln /tmp/original.txt /tmp/hardlink.txt
stat /tmp/original.txt | grep -E "Inode|Links"
# Inode: 12345 Links: 2 <-- Link count increased!
stat /tmp/hardlink.txt | grep -E "Inode|Links"
# Inode: 12345 Links: 2 <-- Same inode!
# Both names see the same content
cat /tmp/original.txt
cat /tmp/hardlink.txt
# Modify through one name, see it through the other
echo "modified!" >> /tmp/hardlink.txt
cat /tmp/original.txt
# I am the original
# modified!
What Happens When You Delete a Hard Link?
Deleting a file (with rm) actually removes a directory entry and decrements the inode's link count. The data is only freed when the link count reaches zero AND no processes have the file open.
# Check link count
stat /tmp/original.txt | grep Links
# Links: 2
# Delete the "original" -- only removes one name
rm /tmp/original.txt
# The hard link still works! Data is intact.
cat /tmp/hardlink.txt
# I am the original
# modified!
stat /tmp/hardlink.txt | grep Links
# Links: 1 <-- Count decreased, but > 0, so data survives
BEFORE rm:                            AFTER rm:

Directory entries:                    Directory entries:
  "original.txt" --+                    (deleted)
                   +--> Inode 12345     "hardlink.txt" --> Inode 12345
  "hardlink.txt" --+                      Links: 1
      Links: 2                            Data blocks: still allocated
      Data blocks: allocated
Hard Link Restrictions
- Cannot cross filesystems. A hard link must be on the same filesystem as the target because inode numbers are only unique within a filesystem.
- Cannot link to directories (for regular users). This prevents circular references in the directory tree. Only the filesystem itself creates hard links to directories (. and ..).
# This will fail:
ln /tmp/somefile /home/somefile # FAILS if /tmp and /home are on different filesystems
# This will also fail:
ln /tmp/mydir /tmp/mydir-link # FAILS: can't hard link directories
# ln: /tmp/mydir: hard link not allowed for directory
Think About It: Why would circular directory hard links be catastrophic? Think about what happens to find, du, or any tool that walks the directory tree recursively.
Symbolic (Soft) Links
A symbolic link (symlink) is a special file that contains a text path pointing to another file. It is like a shortcut or alias. Unlike hard links, symlinks:
- Have their own inode
- Can cross filesystem boundaries
- Can point to directories
- Can point to files that do not exist (dangling link)
# Create a symlink
echo "target file content" > /tmp/target.txt
ln -s /tmp/target.txt /tmp/symlink.txt
# Inspect
ls -l /tmp/symlink.txt
# lrwxrwxrwx 1 user user 15 ... /tmp/symlink.txt -> /tmp/target.txt
# ^                                                 ^^^^^^^^^^^^^^^
# 'l' = link type                                   target path stored in the link
# Different inodes
ls -li /tmp/target.txt /tmp/symlink.txt
# 12345 -rw-r--r-- 1 user user 20 ... /tmp/target.txt
# 12346 lrwxrwxrwx 1 user user 15 ... /tmp/symlink.txt -> /tmp/target.txt
# Reading through the symlink gives you the target's content
cat /tmp/symlink.txt
# target file content
Dangling Symlinks
# Delete the target
rm /tmp/target.txt
# The symlink still exists but points nowhere
ls -l /tmp/symlink.txt
# lrwxrwxrwx 1 user user 15 ... /tmp/symlink.txt -> /tmp/target.txt
cat /tmp/symlink.txt
# cat: /tmp/symlink.txt: No such file or directory
# Find dangling symlinks
find /tmp -xtype l 2>/dev/null
Symlinks to Directories
# Symlinks can point to directories (unlike hard links)
ln -s /var/log /tmp/logs
ls /tmp/logs/
# Shows contents of /var/log/
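Directory symlinks can even point at each other, forming a cycle -- exactly the situation hard links are banned to prevent. The kernel handles it gracefully: path resolution gives up with an ELOOP error instead of recursing forever. A quick sketch with scratch paths under /tmp:

```shell
# Two symlinks pointing at each other form a cycle; the kernel
# detects it during path resolution instead of looping forever.
mkdir -p /tmp/loop-demo
cd /tmp/loop-demo
ln -sfn b a
ln -sfn a b
cat a 2>&1 | head -1     # cat: a: Too many levels of symbolic links
```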
Real-World Symlink Uses
Symlinks are everywhere in a modern Linux system:
# UsrMerge: /bin -> /usr/bin
ls -l /bin
# Alternatives system (Debian/Ubuntu)
ls -l /usr/bin/python3
ls -l /etc/alternatives/editor
# Library versioning
ls -l /usr/lib/x86_64-linux-gnu/libssl* 2>/dev/null || ls -l /usr/lib64/libssl* 2>/dev/null
# libssl.so -> libssl.so.3
# libssl.so.3 -> libssl.so.3.0.0
Hard Links vs Soft Links: The Complete Comparison
+---------------------------------------------------------------+
|                  Hard Link vs Symbolic Link                   |
+---------------------------------------------------------------+
|                                                               |
|  Feature               Hard Link        Symbolic Link         |
|  -------------------------------------------------------     |
|  Same inode as target  Yes              No (own inode)        |
|  Cross filesystems     No               Yes                   |
|  Link to directories   No*              Yes                   |
|  Survives target       Yes (data stays) No (dangling link)    |
|    deletion                                                   |
|  File type in ls -l    Same as target   'l' (link)            |
|  Size                  Same as target   Length of path string |
|  Relative paths        N/A              Relative to link loc  |
|  Performance           Direct (fast)    Extra lookup (tiny)   |
|                                                               |
|  * root can hard-link directories on some FS, but shouldn't   |
+---------------------------------------------------------------+
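The "Size" row is easy to verify for yourself: a symlink's reported size is exactly the length of the path string it stores. A sketch (the target path is illustrative and does not even need to exist):

```shell
# A symlink's size equals the length of the stored target path
ln -sfn /tmp/some/target.txt /tmp/sz-link   # dangling link is fine
path=$(readlink /tmp/sz-link)
size=$(stat -c %s /tmp/sz-link)
echo "stored path: '$path'  length=${#path}  stat size=$size"
```

Note that stat without -L reports on the link itself, which is why this works even though the target is missing.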
Hands-On: Seeing the Difference
mkdir -p /tmp/linklab && cd /tmp/linklab
# Create original file
echo "I am the data" > original.txt
# Create both types of links
ln original.txt hard.txt
ln -s original.txt soft.txt
# Compare inodes
ls -li original.txt hard.txt soft.txt
# original.txt and hard.txt: SAME inode number
# soft.txt: DIFFERENT inode number
# Compare with stat
stat original.txt | grep -E "Inode|Links|Size"
stat hard.txt | grep -E "Inode|Links|Size"
stat soft.txt | grep -E "Inode|Links|Size"
# Delete the original
rm original.txt
# Hard link survives
cat hard.txt
# I am the data
# Soft link is broken
cat soft.txt
# cat: soft.txt: No such file or directory
# The file type
file hard.txt
# hard.txt: ASCII text
file soft.txt
# soft.txt: broken symbolic link to original.txt
The Virtual Filesystem (VFS) Layer
Here is one of the most elegant ideas in Linux kernel design. Linux supports dozens of different filesystems: ext4, XFS, Btrfs, FAT32, NTFS, NFS, procfs, sysfs, tmpfs, and many more. Yet when you type cat /proc/cpuinfo, you use the same cat command you would use on a regular file. When you write to /sys/class/leds/led0/brightness, you use the same echo command. How?
The Virtual Filesystem Switch (VFS) is an abstraction layer in the kernel that provides a uniform interface for all filesystems. User programs never talk to a specific filesystem -- they make VFS system calls (open, read, write, close, stat), and the VFS routes each call to the appropriate filesystem driver.
+----------------------------------------------+
|           User Space Applications            |
|     (cat, ls, cp, vim, python, nginx...)     |
+----------------------------------------------+
           | system calls: open, read, write, stat
           v
+----------------------------------------------+
|       VFS (Virtual Filesystem Switch)        |
|  Uniform interface: inodes, dentries, files  |
+----------------------------------------------+
   |        |        |        |        |
   v        v        v        v        v
+------+ +------+ +------+ +------+ +------+
| ext4 | | XFS  | | proc | | sys  | | NFS  |
+------+ +------+ +------+ +------+ +------+
   |        |        |        |        |
   v        v        v        v        v
 [disk]   [disk]  [kernel] [kernel] [network]
The VFS defines a set of data structures that every filesystem must implement:
- superblock: Metadata about the entire filesystem (size, block size, state)
- inode: Metadata about a single file
- dentry: A directory entry (maps a name to an inode)
- file: An open file (tracks current position, open flags, etc.)
Each filesystem driver provides its own implementations of operations like "read inode from disk" or "create a new file." The VFS calls the right one.
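You can see the uniform interface at work by running the same stat call against a disk-backed file and a kernel-generated one. The VFS routes each request to a different driver, but the userspace API is identical. One telltale: /proc files typically report size 0, because their content is generated at read time rather than stored anywhere.

```shell
# Same syscall (stat), different backends: a disk filesystem vs procfs
stat -c '%n: type=%F size=%s' /etc/hostname /proc/uptime
# /proc/uptime reports size 0 -- there is nothing on disk to measure
```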
# You can see all mounted filesystem types
mount | awk '{print $5}' | sort -u
# Or more cleanly
cat /proc/filesystems
# nodev sysfs
# nodev tmpfs
# nodev proc
# ext4
# xfs
# nodev devtmpfs
# ...
# "nodev" means the filesystem doesn't use a block device
Think About It: Why is the VFS architecture so powerful? Think about what it means for userspace programs. A program that works with files does not need to know or care whether it is reading from a local ext4 disk, a network NFS share, or a virtual /proc file. The VFS makes them all look the same.
/proc -- The Process Filesystem
/proc is a virtual filesystem -- none of its files exist on disk. The kernel generates their contents on the fly when you read them. It serves two purposes: exposing process information and providing kernel tuning parameters.
Process Information
Every running process has a directory in /proc named by its PID:
# Your current shell's PID
echo $$
# Explore your own process
ls /proc/$$/
# Key files in a process directory
cat /proc/$$/cmdline | tr '\0' ' '; echo # Command line
cat /proc/$$/status | head -10 # Status info
cat /proc/$$/environ | tr '\0' '\n' | head -5 # Environment vars
ls -l /proc/$$/fd/ # Open file descriptors
cat /proc/$$/maps | head -10 # Memory mappings
readlink /proc/$$/exe # Path to executable
readlink /proc/$$/cwd # Current working directory
System Information
# CPU information
cat /proc/cpuinfo | grep "model name" | uniq
# Memory
cat /proc/meminfo | head -5
# Kernel version
cat /proc/version
# Uptime (in seconds)
cat /proc/uptime
# Load averages
cat /proc/loadavg
# Mounted filesystems (from kernel's perspective)
cat /proc/mounts | head -10
# Command line the kernel was booted with
cat /proc/cmdline
Kernel Tuning via /proc/sys/
The /proc/sys/ directory contains files that map to kernel parameters. You can read and write them to tune system behavior at runtime.
# IP forwarding (routing)
cat /proc/sys/net/ipv4/ip_forward
# Maximum number of open files
cat /proc/sys/fs/file-max
# Maximum number of PIDs
cat /proc/sys/kernel/pid_max
# Hostname
cat /proc/sys/kernel/hostname
# Change a parameter at runtime (requires root)
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
# The sysctl command provides a friendlier interface
sysctl net.ipv4.ip_forward
sudo sysctl -w net.ipv4.ip_forward=1
/sys -- The Sysfs Filesystem
/sys (sysfs) exports the kernel's view of devices, drivers, buses, and other kernel objects as a filesystem hierarchy. Introduced in Linux 2.6, it replaced much of the hardware information that used to live in /proc.
# Block devices and their attributes
ls /sys/block/
cat /sys/block/sda/size 2>/dev/null # Size in 512-byte sectors
cat /sys/block/sda/queue/scheduler 2>/dev/null # I/O scheduler
# Network interfaces
ls /sys/class/net/
cat /sys/class/net/eth0/address 2>/dev/null # MAC address
cat /sys/class/net/eth0/mtu 2>/dev/null # MTU
cat /sys/class/net/eth0/operstate 2>/dev/null # up/down
# Power management
ls /sys/power/ 2>/dev/null
cat /sys/power/state 2>/dev/null
# CPU information
ls /sys/devices/system/cpu/
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 2>/dev/null
# Modules (loaded kernel modules)
ls /sys/module/ | head -10
Writing to /sys
Some /sys files are writable, allowing you to change hardware and driver settings:
# Example: Change I/O scheduler for a disk
cat /sys/block/sda/queue/scheduler 2>/dev/null
# [mq-deadline] none
# sudo sh -c 'echo "none" > /sys/block/sda/queue/scheduler'
# Example: Control CPU frequency governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null
Device Files in /dev
As discussed in Chapter 5, /dev contains device files. Now that you understand inodes, let us look at how device files actually work.
Device files have a major number (identifies the driver) and a minor number (identifies the specific device within that driver):
ls -l /dev/sda /dev/null /dev/tty /dev/zero 2>/dev/null
# brw-rw---- 1 root disk 8, 0 ... /dev/sda (block, major 8, minor 0)
# crw-rw-rw- 1 root root 1, 3 ... /dev/null (char, major 1, minor 3)
# crw-rw-rw- 1 root tty 5, 0 ... /dev/tty (char, major 5, minor 0)
# crw-rw-rw- 1 root root 1, 5 ... /dev/zero (char, major 1, minor 5)
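stat can print these numbers straight from the inode: the %t and %T format sequences give the major and minor numbers (in hexadecimal):

```shell
# Major/minor of device files, read from their inode metadata
stat -c 'name=%n type=%F major=0x%t minor=0x%T' /dev/null /dev/zero
# On Linux, /dev/null is char device 1,3 and /dev/zero is char 1,5
```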
Block Devices vs Character Devices
Block Devices (b):
- Accessed in blocks (typically 512 bytes or 4096 bytes)
- Support random access (seek to any position)
- Examples: hard drives, SSDs, USB drives
- /dev/sda, /dev/nvme0n1, /dev/loop0
Character Devices (c):
- Accessed one character (byte) at a time
- Usually sequential access (some support seek)
- Examples: terminals, serial ports, random number generators
- /dev/tty, /dev/null, /dev/urandom
Special Device Files
# /dev/null -- The black hole. Discard any output.
echo "this goes nowhere" > /dev/null
# /dev/zero -- Infinite stream of zero bytes.
head -c 1024 /dev/zero | xxd | head
# /dev/urandom -- Infinite stream of random bytes.
head -c 32 /dev/urandom | xxd
# /dev/full -- Always reports "disk full" on write.
echo "test" > /dev/full
# bash: echo: write error: No space left on device
# /dev/random -- Random bytes (may block if entropy pool is empty, on older kernels)
head -c 32 /dev/random | xxd
The udev System
Modern Linux uses udev (managed by systemd as systemd-udevd) to dynamically create and manage device files. When you plug in a USB drive, udev detects the hardware event, creates the appropriate /dev entry, and can run rules to set permissions, create symlinks, or trigger scripts.
# View udev rules
ls /etc/udev/rules.d/
ls /usr/lib/udev/rules.d/ | head -20
# Monitor device events in real time (plug/unplug something)
sudo udevadm monitor --property
# (Press Ctrl+C to stop)
# Get device info
sudo udevadm info /dev/sda 2>/dev/null | head -20
Hands-On: Inode and Link Deep Dive
Exercise 1: Watching the Link Count
mkdir -p /tmp/inode-lab && cd /tmp/inode-lab
# Create a file and check its link count
echo "test data" > file.txt
stat file.txt | grep Links
# Links: 1
# Create hard links
ln file.txt link1.txt
ln file.txt link2.txt
ln file.txt link3.txt
stat file.txt | grep Links
# Links: 4
# All share the same inode
ls -li file.txt link1.txt link2.txt link3.txt
# All show the same inode number
# Delete some links
rm link1.txt link2.txt
stat file.txt | grep Links
# Links: 2
rm link3.txt
stat file.txt | grep Links
# Links: 1 (only the original name remains)
Exercise 2: Why Deleted Files Still Use Space
# Create a large file
dd if=/dev/zero of=/tmp/bigfile bs=1M count=100
# Check space
df -h /tmp
# Open the file in the background (keep a file descriptor open)
sleep 3600 < /tmp/bigfile &
BG_PID=$!
# Delete the file
rm /tmp/bigfile
# Check: ls shows it's gone
ls /tmp/bigfile 2>&1
# No such file or directory
# But space is NOT freed!
df -h /tmp
# (same usage as before)
# Find deleted-but-open files
sudo lsof +L1 2>/dev/null | grep bigfile
# The inode still exists because the process holds a file descriptor
# The link count in the directory is 0, but the kernel count includes open FDs
# Kill the background process
kill $BG_PID
# NOW the space is freed
df -h /tmp
This is exactly the mystery from our opening scenario. Now you know how to diagnose it.
Exercise 3: Directory Link Counts
# A directory's link count has a special meaning
mkdir -p /tmp/dir-links/sub1/sub2
# Check the link count of /tmp/dir-links
stat /tmp/dir-links | grep Links
# Links: 3
# Why 3?
# 1. The entry "dir-links" in /tmp
# 2. The "." entry inside /tmp/dir-links itself
# 3. The ".." entry inside /tmp/dir-links/sub1
#
# Formula: link_count = 2 + number_of_immediate_subdirectories
ls -la /tmp/dir-links
# drwxr-xr-x 3 user user ... . <-- "." is a hard link to itself
# drwxrwxrwt ... ... .. <-- ".." links to parent
# Add another subdirectory
mkdir /tmp/dir-links/sub3
stat /tmp/dir-links | grep Links
# Links: 4 (2 + 2 subdirectories)
Debug This
A user reports: "I created a symbolic link to /opt/app/config.yaml, but the application says the file does not exist, even though I can see it with ls -l."
You check:
$ ls -l /home/alice/config.yaml
lrwxrwxrwx 1 alice alice 22 Jan 15 10:00 /home/alice/config.yaml -> ../opt/app/config.yaml
What is wrong?
Solution
The symlink uses a relative path: ../opt/app/config.yaml. Relative symlink targets are resolved relative to the symlink's location, not the current working directory.
The symlink is at /home/alice/config.yaml. The relative path ../opt/app/config.yaml resolves to /home/opt/app/config.yaml, which does not exist.
The fix is to use an absolute path:
rm /home/alice/config.yaml
ln -s /opt/app/config.yaml /home/alice/config.yaml
Or a correct relative path:
ln -s ../../opt/app/config.yaml /home/alice/config.yaml
Lesson: When in doubt, use absolute paths for symlinks. Relative paths are useful when the entire directory tree might move (e.g., inside a container or chroot), but they are a common source of bugs.
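You can reproduce the bug safely under a scratch /tmp tree (all paths below are illustrative):

```shell
# Recreate the layout: treat /tmp/relsym as the "filesystem root"
mkdir -p /tmp/relsym/opt/app /tmp/relsym/home/alice
echo "config" > /tmp/relsym/opt/app/config.yaml

# The buggy link: ../opt/... is resolved from /tmp/relsym/home/alice,
# landing in /tmp/relsym/home/opt/app/config.yaml (which doesn't exist)
ln -sfn ../opt/app/config.yaml /tmp/relsym/home/alice/bad.yaml

# The fixed relative link climbs two levels instead of one
ln -sfn ../../opt/app/config.yaml /tmp/relsym/home/alice/good.yaml

cat /tmp/relsym/home/alice/good.yaml   # config
cat /tmp/relsym/home/alice/bad.yaml || echo "(broken, as expected)"
```

readlink -f on each link shows exactly where the resolution lands, which makes bugs like this easy to confirm.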
What Just Happened?
+------------------------------------------------------------------+
|                         CHAPTER 8 RECAP                          |
+------------------------------------------------------------------+
|                                                                  |
|  Inodes: The hidden data structure behind every file             |
|    - Stores metadata: permissions, timestamps, data block ptrs   |
|    - Does NOT store the filename (that's in the directory)       |
|    - Use stat to inspect, ls -i for inode numbers, df -i count   |
|                                                                  |
|  Hard Links: Additional name -> same inode                       |
|    - Same inode number, shared data                              |
|    - Survives "deletion" of other names (link count > 0)         |
|    - Cannot cross filesystems, cannot link to directories        |
|                                                                  |
|  Soft (Symbolic) Links: Separate file containing a path          |
|    - Different inode, stores target path as content              |
|    - Can cross filesystems, can link to directories              |
|    - Can become dangling (broken) if target is deleted           |
|                                                                  |
|  VFS: The kernel abstraction making all filesystems uniform      |
|    - ext4, XFS, proc, sys, NFS all behind one interface          |
|    - Userspace uses same syscalls for all filesystem types       |
|                                                                  |
|  /proc: Virtual FS for process info and kernel tuning            |
|  /sys:  Virtual FS for device/driver/bus information             |
|  /dev:  Device files (block, character) managed by udev          |
|                                                                  |
|  Tools: stat, ls -i, ln, ln -s, readlink, lsof +L1, df -i        |
|                                                                  |
+------------------------------------------------------------------+
Try This
Exercises
- Inode investigation. Create a file, then create 5 hard links to it. Use stat to verify the link count at each step. Delete the links one by one and confirm the count decreases. At what point is the data actually freed?
- Symlink maze. Create a chain of symlinks: a -> b -> c -> d -> real_file. Does reading a work? Now delete c. What happens when you read a? What error message do you get?
- The /proc explorer. Write a script that takes a PID as an argument and prints:
  - The command that started the process
  - Its current working directory
  - The number of open file descriptors
  - Its memory usage (from /proc/<PID>/status)
- Deleted-but-open files. Create a 50MB file, open it with tail -f in the background, delete the file, and verify with df that the space is not freed. Then kill tail and verify the space is freed. Use lsof +L1 to find the deleted-but-open file before killing the process.
- Directory link count formula. Verify that a directory's link count equals 2 + (number of immediate subdirectories). Create a directory with 0, 1, 3, and 5 subdirectories, checking the link count each time.
Bonus Challenge
Write a script called link-analyzer.sh that takes a filename as an argument and reports:
- Whether it is a regular file, symlink, directory, device, etc.
- Its inode number
- Its link count
- If it is a symlink, the target path (and whether the target exists)
- If it is a regular file with link count > 1, find all other hard links to the same inode
#!/bin/bash
# link-analyzer.sh -- Analyze links and inodes for a given file

FILE="${1:?Usage: $0 <filename>}"

if [ ! -e "$FILE" ] && [ ! -L "$FILE" ]; then
    echo "Error: '$FILE' does not exist"
    exit 1
fi

echo "=== Link Analysis: $FILE ==="
echo ""

# File type (GNU stat first, BSD stat as a fallback)
TYPE=$(stat -c %F "$FILE" 2>/dev/null || stat -f %HT "$FILE" 2>/dev/null)
echo "Type: $TYPE"

# Inode number
INODE=$(stat -c %i "$FILE" 2>/dev/null)
echo "Inode: $INODE"

# Link count
LINKS=$(stat -c %h "$FILE" 2>/dev/null)
echo "Link count: $LINKS"

# If symlink, show target
if [ -L "$FILE" ]; then
    TARGET=$(readlink "$FILE")
    echo "Symlink target: $TARGET"
    if [ -e "$FILE" ]; then
        echo "Target status: EXISTS"
        echo "Resolved path: $(readlink -f "$FILE")"
    else
        echo "Target status: BROKEN (dangling symlink)"
    fi
fi

# If regular file with multiple hard links, find siblings
if [ -f "$FILE" ] && [ "$LINKS" -gt 1 ]; then
    echo ""
    echo "--- Other hard links to inode $INODE ---"
    MOUNT=$(df --output=target "$FILE" 2>/dev/null | tail -1)
    REAL=$(readlink -f "$FILE")   # canonical path, so the exclusion below works
    if [ -n "$MOUNT" ]; then
        # -xdev keeps find on one filesystem (inode numbers are only
        # unique per filesystem); exclude the path we started from
        sudo find "$MOUNT" -xdev -inum "$INODE" 2>/dev/null | grep -Fxv "$REAL"
    fi
fi

echo ""
echo "--- Full stat output ---"
stat "$FILE"
This completes Part II of the book. You now understand how Linux organizes files on disk (Chapter 5), how access is controlled (Chapter 6), how physical storage is managed (Chapter 7), and the internal mechanisms of inodes, links, and the VFS (Chapter 8). Next, in Part III, we move to the dynamic side of Linux: users, processes, signals, and inter-process communication.