NFS & Network Filesystems

Why This Matters

You have a team of five developers, each working on a separate server. They all need access to the same project files, configuration data, and shared libraries. You could copy files between servers manually, but every copy is stale the moment someone makes a change. You could use rsync on a schedule, but that introduces delays and conflicts.

Or you could use NFS -- the Network File System. With NFS, one server exports a directory, and all other servers mount it as if it were a local directory. When a developer saves a file on one server, every other server sees the change almost immediately. There is no copying, no syncing, no conflicts.

NFS has been the standard way to share filesystems across Unix and Linux machines since 1984. It is used everywhere: shared home directories in universities, shared media libraries, centralized configuration distribution, and shared data stores in compute clusters. If you manage more than one Linux server, you will eventually need NFS.

This chapter covers NFS server and client setup, performance tuning, security considerations, and alternatives like SSHFS and CIFS/Samba for Windows interoperability.


Try This Right Now

Check if your system already has NFS capabilities:

# Check if NFS client utilities are installed
$ which mount.nfs 2>/dev/null && echo "NFS client available" || echo "NFS client not installed"

# Check if NFS server is running (it likely is not on a workstation)
$ systemctl status nfs-server 2>/dev/null || systemctl status nfs-kernel-server 2>/dev/null

# Check for any existing NFS mounts
$ mount -t nfs,nfs4

# Check what your system might be exporting
$ cat /etc/exports 2>/dev/null

How NFS Works

NFS allows a server to share (export) directories over the network. Clients mount these exports and access files as though they were local. The key concept is transparency -- applications do not need to know they are using a network filesystem.

┌──────────────────────┐          ┌──────────────────────┐
│      NFS Server      │          │      NFS Client      │
│                      │          │                      │
│  /srv/shared/        │  Network │  /mnt/shared/        │
│    ├── project/      │◄────────►│    ├── project/      │
│    ├── data/         │  NFSv4   │    ├── data/         │
│    └── configs/      │ TCP/2049 │    └── configs/      │
│                      │          │                      │
│  Exports via         │          │  Mounts remote       │
│  /etc/exports        │          │  export as local     │
└──────────────────────┘          └──────────────────────┘

NFS Versions

Version    Key Features
NFSv3      Stateless, uses multiple ports (portmap), UDP or TCP
NFSv4      Stateful, single port (2049/TCP), built-in security (Kerberos), ACL support
NFSv4.1    Parallel NFS (pNFS), session trunking
NFSv4.2    Server-side copy, sparse files, application I/O hints

Use NFSv4 unless you have a specific reason to use v3. It is simpler (one port), more secure, and performs better over WANs.
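You can see which protocol versions a running server is willing to speak; the kernel exposes this through /proc/fs/nfsd/versions (a quick sketch -- the file only exists while nfsd is loaded):

```shell
# Print which NFS protocol versions the local nfsd offers.
# A "+" prefix means enabled, "-" means disabled (e.g. "-2 +3 +4 +4.1 +4.2").
if [ -r /proc/fs/nfsd/versions ]; then
    cat /proc/fs/nfsd/versions
else
    echo "nfsd is not running on this machine"
fi
```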


Setting Up an NFS Server

Install the NFS Server

# Debian/Ubuntu
$ sudo apt install nfs-kernel-server

# Fedora/RHEL
$ sudo dnf install nfs-utils

# Arch
$ sudo pacman -S nfs-utils

Create Directories to Share

# Create a shared directory
$ sudo mkdir -p /srv/nfs/shared
$ sudo mkdir -p /srv/nfs/readonly

# Put some content in them
$ sudo sh -c 'echo "Hello from the NFS server" > /srv/nfs/shared/welcome.txt'
$ sudo sh -c 'echo "Reference data" > /srv/nfs/readonly/reference.txt'

# Set ownership -- for simple setups, use nobody:nogroup
# (on Fedora/RHEL the group is nobody rather than nogroup)
$ sudo chown -R nobody:nogroup /srv/nfs/shared
$ sudo chown -R nobody:nogroup /srv/nfs/readonly

Configure Exports

The /etc/exports file defines what gets shared and with whom:

$ sudo vim /etc/exports
# /etc/exports
#
# Syntax: directory    client(options) [client(options)] ...
#
# Share /srv/nfs/shared with the 192.168.1.0/24 network, read-write
/srv/nfs/shared    192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# Share /srv/nfs/readonly with everyone, read-only
/srv/nfs/readonly  *(ro,sync,no_subtree_check)

# Share with a specific host
# /srv/nfs/private  webserver.local(rw,sync,no_subtree_check)

Key export options explained:

Option            Meaning
rw                Read-write access
ro                Read-only access
sync              Write data to disk before replying (safe, slower)
async             Reply before data is written to disk (fast, risky)
no_subtree_check  Disables subtree checking (improves reliability)
root_squash       Map root (UID 0) on client to nobody (default, safer)
no_root_squash    Allow client root to act as root on server (needed for some use cases)
all_squash        Map all users to nobody
anonuid=1000      Map anonymous users to UID 1000
anongid=1000      Map anonymous groups to GID 1000

WARNING: no_root_squash is a security risk. A client with root access can read/write any file on the export as root. Only use it when truly necessary (e.g., for diskless clients or specific applications that require it).

Apply and Start

# Apply export changes
$ sudo exportfs -ra

# Verify what is exported
$ sudo exportfs -v
/srv/nfs/shared    192.168.1.0/24(rw,wdelay,no_root_squash,no_subtree_check,...)
/srv/nfs/readonly  <world>(ro,wdelay,root_squash,no_subtree_check,...)

# Start and enable the NFS server
$ sudo systemctl enable --now nfs-server

# Verify it is listening
$ sudo ss -tlnp | grep 2049
LISTEN  0  64  *:2049  *:*

Distro Note: On Debian/Ubuntu, the service is called nfs-kernel-server. On Fedora/RHEL/Arch, it is nfs-server.


Setting Up an NFS Client

Install Client Utilities

# Debian/Ubuntu
$ sudo apt install nfs-common

# Fedora/RHEL
$ sudo dnf install nfs-utils

# Arch
$ sudo pacman -S nfs-utils

Test What the Server Exports

# Show exports from a server
$ showmount -e 192.168.1.10
Export list for 192.168.1.10:
/srv/nfs/shared   192.168.1.0/24
/srv/nfs/readonly *
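Note that showmount relies on the v3-era auxiliary RPC services, so an NFSv4-only server may not answer it. With v4 you can instead mount the server's pseudo-root and browse the export tree directly (same server address as above):

```
# Mount the NFSv4 pseudo-root to see everything the server exports
$ sudo mkdir -p /mnt/browse
$ sudo mount -t nfs4 192.168.1.10:/ /mnt/browse
$ ls /mnt/browse
```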

Mount an NFS Share

# Create mount point
$ sudo mkdir -p /mnt/shared

# Mount the NFS share
$ sudo mount -t nfs 192.168.1.10:/srv/nfs/shared /mnt/shared

# Verify
$ mount | grep nfs
192.168.1.10:/srv/nfs/shared on /mnt/shared type nfs4 (rw,relatime,...)

# Test it
$ cat /mnt/shared/welcome.txt
Hello from the NFS server

# Write something (if rw)
$ echo "Hello from the client" | sudo tee /mnt/shared/client_message.txt

Specifying NFS Mount Options

# Mount with specific options
$ sudo mount -t nfs -o vers=4,rw,hard,timeo=600,retrans=2 \
    192.168.1.10:/srv/nfs/shared /mnt/shared

Key client mount options:

Option         Meaning
vers=4         Force NFSv4
hard           Retry NFS requests indefinitely (default; safe for data integrity)
soft           Give up after retrans retries and return an error (risks silent data corruption)
intr           Allow signals to interrupt hung NFS operations (ignored since kernel 2.6.25)
timeo=N        Timeout in tenths of a second before retry
retrans=N      Number of retries before giving up (soft mounts)
rsize=1048576  Read buffer size in bytes
wsize=1048576  Write buffer size in bytes

Think About It: What happens to applications accessing an NFS mount when the server goes down? How do hard and soft mount options change this behavior?

Making Mounts Persistent with /etc/fstab

# Add to /etc/fstab for automatic mounting at boot
$ sudo vim /etc/fstab
# NFS mounts in /etc/fstab
192.168.1.10:/srv/nfs/shared   /mnt/shared   nfs   defaults,_netdev   0 0
192.168.1.10:/srv/nfs/readonly /mnt/readonly nfs   ro,_netdev          0 0

The _netdev option is critical -- it tells the system to wait for the network before attempting the mount. Without it, the system may hang at boot waiting for an NFS mount that cannot yet succeed.
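If you want boot to proceed even when the server is unreachable, systemd can mount the share on first access instead of at boot time; nofail, x-systemd.automount, and x-systemd.idle-timeout are standard systemd mount options (a sketch using the same server and paths as above):

```
# /etc/fstab -- mount on first access, unmount after 5 minutes of inactivity
192.168.1.10:/srv/nfs/shared  /mnt/shared  nfs  _netdev,nofail,x-systemd.automount,x-systemd.idle-timeout=300  0 0
```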

# Test fstab entries without rebooting
$ sudo mount -a

# Verify
$ df -hT | grep nfs
192.168.1.10:/srv/nfs/shared  nfs4  50G  12G  38G  24% /mnt/shared

Autofs: Mount on Demand

Instead of mounting NFS shares permanently, autofs mounts them automatically when accessed and unmounts them after a period of inactivity. This is ideal for shares that are not needed constantly.

# Install autofs
$ sudo apt install autofs        # Debian/Ubuntu
$ sudo dnf install autofs        # Fedora/RHEL

# Configure the master map
$ sudo vim /etc/auto.master
# /etc/auto.master
# Mount point       Map file              Options
/mnt/auto           /etc/auto.nfs         --timeout=300

# Configure the NFS map
$ sudo vim /etc/auto.nfs
# /etc/auto.nfs
# Key       Options                           Location
shared      -rw,sync                          192.168.1.10:/srv/nfs/shared
readonly    -ro                               192.168.1.10:/srv/nfs/readonly

# Start autofs
$ sudo systemctl enable --now autofs

# Now just access the directory -- autofs mounts it automatically
$ ls /mnt/auto/shared
welcome.txt  client_message.txt

# After 300 seconds of inactivity, it unmounts automatically

The power of autofs is that it avoids hung mounts. If the NFS server is down, accessing the directory fails quickly rather than leaving a hung mount that blocks every process touching it.


NFS Performance Tuning

Server-Side Tuning

# Increase the number of NFS daemon threads (default is 8)
# For busy servers, use one thread per CPU core or more
$ sudo vim /etc/nfs.conf
[nfsd]
threads = 16

# Or set it at runtime
$ sudo sh -c 'echo 16 > /proc/fs/nfsd/threads'

# Check current thread count
$ cat /proc/fs/nfsd/threads
16

Client-Side Tuning

# Mount with larger read/write buffers
$ sudo mount -t nfs -o rsize=1048576,wsize=1048576 \
    192.168.1.10:/srv/nfs/shared /mnt/shared

# Check current NFS mount settings
$ nfsstat -m
/mnt/shared from 192.168.1.10:/srv/nfs/shared
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,...

Monitoring NFS Performance

# NFS statistics on the server
$ nfsstat -s
Server rpc stats:
calls      badcalls   badfmt     badauth    badclnt
14523      0          0          0          0

# NFS statistics on the client
$ nfsstat -c

# Per-mount statistics
$ cat /proc/self/mountstats | grep -A 20 "nfs"

# Quick bandwidth test with dd
$ dd if=/dev/zero of=/mnt/shared/testfile bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.23 s, 85.2 MB/s
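For a read test, flush the client's page cache first -- otherwise you measure RAM, not the network. drop_caches is a standard kernel interface, and the test file is the one written above:

```
# Flush the page cache, then time a sequential read from the share
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
$ dd if=/mnt/shared/testfile of=/dev/null bs=1M
```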

NFS Security Considerations

NFS was designed for trusted networks. By default, NFS trusts client machines to report user identities honestly. This means:

  1. IP-based access control only: NFS exports restrict access by IP address, not by user authentication.
  2. UID/GID must match: NFS identifies users by numeric UID/GID. If user alice is UID 1001 on the client but UID 1002 on the server, the server treats her requests as coming from whoever owns UID 1001 there.
  3. No encryption by default: NFSv3 traffic is sent in the clear. NFSv4 supports Kerberos but it requires additional setup.
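A quick way to catch point 2 is to compare a user's numeric IDs on both machines ("alice" is a placeholder account name -- substitute a real user):

```shell
# Without Kerberos, NFS maps ownership purely by UID/GID, so these
# numbers must match on client and server.
id alice 2>/dev/null || echo "no local user named alice"
# Run the same command on the other machine and compare the uid=/gid= fields.
```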

Basic Security Practices

# Restrict exports to specific subnets, never use *
/srv/nfs/shared    192.168.1.0/24(rw,sync,no_subtree_check)

# Use root_squash (the default) to prevent client root from being server root
# Only disable it when absolutely necessary

# Use firewall rules to restrict NFS access
$ sudo iptables -A INPUT -p tcp --dport 2049 -s 192.168.1.0/24 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 2049 -j DROP

NFSv4 with Kerberos (Overview)

For environments that need stronger security, NFSv4 supports three Kerberos security modes:

Mode   Description
krb5   Authentication only (verifies identity)
krb5i  Authentication + integrity checking
krb5p  Authentication + integrity + encryption

Setting up Kerberos is beyond the scope of this chapter, but know that it exists when you need it.
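For orientation, once Kerberos itself is in place the NFS-specific parts are small. This is a sketch only -- it assumes a working KDC, host keytabs on both machines, and a hypothetical server.example.com:

```
# /etc/exports on the server: require encrypted Kerberos access
/srv/nfs/secure  192.168.1.0/24(rw,sync,sec=krb5p,no_subtree_check)

# On the client: mount with the matching security flavor
$ sudo mount -t nfs -o vers=4.2,sec=krb5p server.example.com:/srv/nfs/secure /mnt/secure
```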


SSHFS: The Simple Alternative

SSHFS mounts a remote directory over SSH. It is slower than NFS but requires zero server-side configuration -- if you can SSH to a machine, you can SSHFS to it.

# Install SSHFS
$ sudo apt install sshfs        # Debian/Ubuntu
$ sudo dnf install fuse-sshfs   # Fedora/RHEL

# Mount a remote directory
$ mkdir -p ~/remote_server
$ sshfs user@192.168.1.10:/home/user ~/remote_server

# Verify
$ ls ~/remote_server
Documents  Downloads  projects

# Unmount
$ fusermount -u ~/remote_server

SSHFS advantages:

  • No server-side setup required (just SSH)
  • Encrypted by default
  • Works through firewalls (port 22)
  • Non-root users can mount

SSHFS disadvantages:

  • Significantly slower than NFS (SSH encryption overhead)
  • Not suitable for high-throughput workloads
  • Can be unreliable on spotty connections
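The spotty-connection problem can be softened with sshfs's reconnect option plus SSH keepalives: reconnect re-establishes a dropped session, and the ServerAlive options detect a dead link within about 45 seconds (these are standard sshfs and ssh options):

```
# Reconnect automatically and detect dead connections
$ sshfs user@192.168.1.10:/home/user ~/remote_server \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
```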

Making SSHFS Persistent

# Add to /etc/fstab (requires key-based SSH auth)
user@192.168.1.10:/home/user  /mnt/remote  fuse.sshfs  defaults,_netdev,allow_other,IdentityFile=/home/localuser/.ssh/id_ed25519  0 0

Think About It: When would you choose SSHFS over NFS? When would NFS be the clear winner?


CIFS/Samba: Windows Interoperability

If you need to share files between Linux and Windows, Samba implements the SMB/CIFS protocol.

Accessing Windows Shares from Linux

# Install CIFS utilities
$ sudo apt install cifs-utils    # Debian/Ubuntu
$ sudo dnf install cifs-utils    # Fedora/RHEL

# Mount a Windows share
$ sudo mkdir -p /mnt/windows_share
$ sudo mount -t cifs //windows-server/ShareName /mnt/windows_share \
    -o username=admin,domain=WORKGROUP

# With credentials file (more secure)
$ cat ~/.smbcredentials
username=admin
password=secret
domain=WORKGROUP

$ chmod 600 ~/.smbcredentials
$ sudo mount -t cifs //windows-server/ShareName /mnt/windows_share \
    -o credentials=/home/user/.smbcredentials

# fstab entry
//windows-server/ShareName /mnt/windows_share cifs credentials=/home/user/.smbcredentials,_netdev 0 0
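By default, files on a CIFS mount appear owned by root. The uid and gid mount options assign ownership to a local account instead, and vers pins the SMB protocol version (standard mount.cifs options; UID/GID 1000 here stands in for your local user):

```
# Make mounted files appear owned by local UID/GID 1000, force SMB 3.0
$ sudo mount -t cifs //windows-server/ShareName /mnt/windows_share \
    -o credentials=/home/user/.smbcredentials,uid=1000,gid=1000,vers=3.0
```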

Setting Up a Samba Server (Brief)

# Install Samba
$ sudo apt install samba

# Configure a share
$ sudo vim /etc/samba/smb.conf
[shared]
   path = /srv/samba/shared
   browseable = yes
   read only = no
   valid users = @smbgroup

# Create a Samba user
$ sudo smbpasswd -a username

# Restart Samba (the service is smbd on Debian/Ubuntu, smb on Fedora/RHEL)
$ sudo systemctl restart smbd

# Test configuration
$ testparm

Debug This

A user reports: "I can mount the NFS share from one client but not from another. Both are on the same subnet."

Server /etc/exports:

/srv/nfs/data    192.168.1.0/24(rw,sync,no_subtree_check)

Working client: 192.168.1.15 -- mounts successfully. Failing client: 192.168.1.25 -- gets "access denied."

Both clients can ping the server. What could be wrong?

Checklist to investigate:

# On the failing client, check if NFS utils are installed
$ which mount.nfs

# Check if the server's firewall is blocking the specific client
$ sudo iptables -L -n | grep 2049

# Check if the export was applied
$ sudo exportfs -v    # on the server

# Common gotcha: space between host and options
# WRONG (exports to everyone read-only):
/srv/nfs/data    192.168.1.0/24 (rw,sync)
#                              ^ this space is the problem!

# RIGHT (exports to subnet read-write):
/srv/nfs/data    192.168.1.0/24(rw,sync)

That accidental space is a classic NFS trap. With the space, 192.168.1.0/24 gets the default (read-only) export, and (rw,sync) becomes a separate entry exporting to everyone with rw. Remove the space and re-run exportfs -ra.
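When the checklist does not turn up the cause, a verbose mount attempt from the failing client usually surfaces the server's exact refusal, and the server's journal shows its side of the conversation:

```
# On the failing client: show each negotiation step and the precise error
$ sudo mount -v -t nfs4 192.168.1.10:/srv/nfs/data /mnt/test

# On the server: watch the logs while the client retries
$ sudo journalctl -u nfs-server --since "5 minutes ago"
```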


┌──────────────────────────────────────────────────────────┐
│                  What Just Happened?                      │
├──────────────────────────────────────────────────────────┤
│                                                           │
│  NFS shares filesystems across the network:               │
│  - Server exports directories via /etc/exports            │
│  - Clients mount them like local filesystems              │
│  - NFSv4 is the modern choice (single port, better       │
│    security)                                              │
│                                                           │
│  Key files and commands:                                  │
│  - /etc/exports         → server export config            │
│  - exportfs -ra         → apply export changes            │
│  - mount -t nfs         → mount on client                 │
│  - /etc/fstab + _netdev → persistent mounts               │
│  - autofs               → mount on demand                 │
│                                                           │
│  Alternatives:                                            │
│  - SSHFS: simple, encrypted, no server setup              │
│  - CIFS/Samba: Linux-Windows file sharing                 │
│                                                           │
│  Security: NFS trusts the network. Restrict by IP,        │
│  use root_squash, consider Kerberos for sensitive data.   │
│                                                           │
└──────────────────────────────────────────────────────────┘

Try This

  1. NFS server and client: If you have two Linux VMs (or use containers), set up an NFS server on one and mount the export on the other. Write files from the client and verify they appear on the server.

  2. Export options: Experiment with ro, all_squash, and root_squash. Create a file as root on the client with each option and check the ownership on the server.

  3. Autofs setup: Configure autofs to mount an NFS share on demand. Access the directory, verify the mount appears, then wait for the timeout and verify it unmounts.

  4. SSHFS experiment: Mount a remote directory over SSHFS. Compare the speed of copying a large file over SSHFS versus NFS (if both are available).

  5. Bonus challenge: Set up an NFS share that uses anonuid and anongid to map all clients to a specific user. Verify that files created by any client user end up owned by the target user on the server.