LXC/LXD & System Containers

Why This Matters

When most people hear "containers," they think of Docker -- packaging a single application into a lightweight image. But there is another kind of container that behaves more like a virtual machine: the system container.

Imagine you need to give 50 students each their own Linux environment for a class. Virtual machines would work, but each one needs its own kernel, gobbling up RAM and boot time. Docker containers run single applications, not full OS environments. What you want is something in between: a container that feels like a full Linux system -- with systemd, multiple services, user accounts, SSH access -- but runs as lightweight as a container.

That is exactly what LXC (Linux Containers) and LXD (or Incus, its community fork) provide. System containers run a full init system, support multiple processes, and feel like virtual machines, yet start in seconds and share the host kernel.

If you are setting up development environments, CI/CD build farms, multi-tenant hosting, or any scenario where you need many lightweight Linux instances, system containers are the right tool.


Try This Right Now

If you have LXD or Incus installed:

$ lxc launch ubuntu:22.04 my-first-container
Creating my-first-container
Starting my-first-container

$ lxc exec my-first-container -- bash
root@my-first-container:~# systemctl status
● my-first-container
    State: running
     Jobs: 0 queued
   Failed: 0 units
root@my-first-container:~# cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04 LTS"
root@my-first-container:~# exit

You just launched a full Ubuntu system in seconds. It has systemd, can run multiple services, has its own network address, and behaves much like a lightweight VM.


System Containers vs Application Containers

This is the key distinction that defines when to use LXC/LXD versus Docker/Podman.

Application Container (Docker/Podman):     System Container (LXC/LXD):

┌─────────────────────────┐              ┌─────────────────────────┐
│    Single Application   │              │   Full Init System      │
│    (nginx, python, etc.)│              │   (systemd/openrc)      │
│                         │              │                         │
│    No init system       │              │   Multiple services     │
│    One main process     │              │   (sshd, cron, nginx,   │
│    No SSH daemon        │              │    logging, etc.)       │
│    No cron, no syslog   │              │                         │
│    Ephemeral            │              │   User accounts         │
│    Immutable image      │              │   Package manager       │
│                         │              │   Feels like a VM       │
└─────────────────────────┘              └─────────────────────────┘

Use for: microservices,                  Use for: dev environments,
CI/CD, app packaging                     VPS hosting, testing,
                                         full OS simulation

+────────────────────+──────────────────────────+──────────────────────────────+
│ FEATURE            │ APPLICATION CONTAINER    │ SYSTEM CONTAINER             │
+────────────────────+──────────────────────────+──────────────────────────────+
│ Init system        │ None (PID 1 is the app)  │ Full (systemd, etc.)         │
│ Processes          │ Single process (ideally) │ Multiple services            │
│ Image model        │ Layered, immutable       │ Full OS, mutable             │
│ Lifecycle          │ Create, run, destroy     │ Long-lived, like a VM        │
│ Package management │ Baked in at build time   │ apt/dnf inside the container │
│ SSH access         │ Not typical (use exec)   │ Supported and common         │
│ Boot time          │ Milliseconds             │ 1-3 seconds                  │
│ Kernel             │ Shares host kernel       │ Shares host kernel           │
│ Density            │ Thousands per host       │ Hundreds per host            │
+────────────────────+──────────────────────────+──────────────────────────────+
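
You can see the PID 1 distinction directly from the host. A minimal sketch, assuming the lxc client and a running container created as shown earlier (the helper name is ours):

```shell
# Print the name of PID 1 inside a system container.
# pid1_of is a hypothetical helper for illustration.
pid1_of() {
  lxc exec "$1" -- ps -o comm= -p 1
}
# pid1_of my-first-container prints "systemd" in an Ubuntu system
# container; in a Docker container, PID 1 is the application itself.
```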

LXC: The Foundation

LXC (Linux Containers) is the original Linux container technology. It uses the same kernel features as Docker (namespaces, cgroups, chroot) but is designed to run full system environments rather than single applications.

LXC provides:

  • Low-level container runtime
  • Configuration files for each container
  • Template-based container creation
  • Direct mapping to kernel features

# Install LXC
$ sudo apt install -y lxc          # Debian/Ubuntu
$ sudo dnf install -y lxc          # Fedora/RHEL

LXC works but is relatively low-level. Most users today interact with it through LXD or Incus, which provide a much better user experience.
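
To give a feel for that lower level, here is a raw LXC session sketched as a function so the steps read in order. These are the standard lxc-* tools from the lxc package; the container name is ours:

```shell
# Raw LXC workflow without LXD/Incus: create a container from the
# "download" template, start it, run a command inside, tear it down.
raw_lxc_demo() {
  sudo lxc-create -n demo -t download -- \
      --dist ubuntu --release jammy --arch amd64
  sudo lxc-start -n demo
  sudo lxc-attach -n demo -- cat /etc/os-release
  sudo lxc-stop -n demo
  sudo lxc-destroy -n demo
}
```

Compare this with the single `lxc launch` you will use below; the convenience layer is the whole point of LXD and Incus.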


LXD and Incus: The Modern Manager

LXD was created by Canonical as a user-friendly management layer on top of LXC. In 2023, Canonical moved LXD under their corporate control, and the Linux Containers community forked it into Incus. Incus is the community-maintained successor.

┌────────────────────────────────────────┐
│            Management Layer            │
│  ┌───────────┐     ┌───────────┐       │
│  │    LXD    │     │   Incus   │       │
│  │(Canonical)│     │(Community)│       │
│  └─────┬─────┘     └─────┬─────┘       │
│        └────────┬────────┘             │
│                 ▼                      │
│     ┌───────────────────────┐          │
│     │     LXC (runtime)     │          │
│     └───────────┬───────────┘          │
│                 ▼                      │
│     ┌───────────────────────┐          │
│     │ Namespaces + Cgroups  │          │
│     │    (Linux Kernel)     │          │
│     └───────────────────────┘          │
└────────────────────────────────────────┘

Both LXD and Incus use the same lxc client command. In this chapter, we use lxc commands that work with either backend.

Installing LXD

On Ubuntu (snap-based):

$ sudo snap install lxd
$ sudo lxd init

The lxd init wizard configures storage, networking, and defaults:

Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend (btrfs, dir, lvm, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR, "auto" or "none") [default=auto]: auto
What IPv6 address should be used? (CIDR, "auto" or "none") [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
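
The same answers can be supplied non-interactively as a preseed. A sketch, wrapped in a function for clarity; the key names follow the format that `lxd init --dump` prints, so treat this as a template rather than a definitive config:

```shell
# Non-interactive initialization: pipe a preseed matching the
# wizard answers above into 'lxd init --preseed'.
preseed_lxd() {
  sudo lxd init --preseed <<'EOF'
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root: {path: /, pool: default, type: disk}
    eth0: {name: eth0, network: lxdbr0, type: nic}
EOF
}
```

This is handy for provisioning many hosts identically (e.g. with Ansible or cloud-init).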

Distro Note: On Fedora, use Incus instead: sudo dnf install incus and sudo incus admin init. The incus command replaces lxc but works identically. On Arch Linux: sudo pacman -S incus.

Add your user to the lxd group:

$ sudo usermod -aG lxd $USER
# Log out and back in

Hands-On: Working with Containers

Launching Containers

# Launch an Ubuntu container
$ lxc launch ubuntu:22.04 web-server

# Launch a Debian container
$ lxc launch images:debian/12 db-server

# Launch a CentOS container
$ lxc launch images:centos/9-Stream build-server

# Launch an Alpine container (very small)
$ lxc launch images:alpine/3.19 tiny

# List all containers
$ lxc list
+──────────────+─────────+──────────────────────+──────────────+────────────+
│     NAME     │  STATE  │        IPV4          │     TYPE     │ SNAPSHOTS  │
+──────────────+─────────+──────────────────────+──────────────+────────────+
│ web-server   │ RUNNING │ 10.10.10.100 (eth0)  │ CONTAINER    │ 0          │
│ db-server    │ RUNNING │ 10.10.10.101 (eth0)  │ CONTAINER    │ 0          │
│ build-server │ RUNNING │ 10.10.10.102 (eth0)  │ CONTAINER    │ 0          │
│ tiny         │ RUNNING │ 10.10.10.103 (eth0)  │ CONTAINER    │ 0          │
+──────────────+─────────+──────────────────────+──────────────+────────────+

Interacting with Containers

# Execute a command
$ lxc exec web-server -- cat /etc/os-release

# Get a shell
$ lxc exec web-server -- bash

# Run as a specific user
$ lxc exec web-server -- su - ubuntu

# Push a file into the container
$ lxc file push local-file.txt web-server/root/file.txt

# Pull a file from the container
$ lxc file pull web-server/var/log/syslog ./syslog-copy

# Edit a file inside the container (opens in $EDITOR)
$ lxc file edit web-server/etc/nginx/nginx.conf

Container Lifecycle

# Stop a container (graceful shutdown)
$ lxc stop web-server

# Start a container
$ lxc start web-server

# Restart a container
$ lxc restart web-server

# Force stop (like power off)
$ lxc stop web-server --force

# Pause (freeze) a container
$ lxc pause web-server

# Delete a container
$ lxc delete web-server

# Delete a running container (force)
$ lxc delete web-server --force

Safety Warning: lxc delete permanently removes the container and all its data. There is no undo. Always use snapshots or backups before deleting containers with important data.
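
One way to make deletion safer is to export first. A sketch using `lxc export`; the helper name and backup path are ours:

```shell
# Export a container to a tarball, then delete it only if the
# export succeeded. backup_and_delete is a hypothetical helper.
backup_and_delete() {
  name="$1"
  lxc export "$name" "/tmp/${name}-backup.tar.gz" || return 1
  lxc delete "$name" --force
}
# Restore later with: lxc import /tmp/<name>-backup.tar.gz
```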

Installing Services Inside System Containers

Because system containers run a full OS, you can use them like regular machines:

$ lxc exec web-server -- bash

# Inside the container -- it is a full system!
root@web-server:~# apt update
root@web-server:~# apt install -y nginx
root@web-server:~# systemctl enable --now nginx
root@web-server:~# curl http://localhost
<!DOCTYPE html>
<html>
<head><title>Welcome to nginx!</title></head>
...
root@web-server:~# exit

Think About It: You just installed nginx inside a container using apt install, and it is running under systemd with systemctl enable. You normally cannot do this in a Docker container. This is what makes system containers feel like VMs -- they run a real init system and manage services normally.


Profiles

Profiles are reusable configuration templates that you can apply to containers. They control resource limits, networking, storage, and more.

# List profiles
$ lxc profile list
+─────────+──────────+
│  NAME   │ USED BY  │
+─────────+──────────+
│ default │    4     │
+─────────+──────────+

# View the default profile
$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default

Creating a Custom Profile

# Create a profile for web servers
$ lxc profile create web-server

$ lxc profile edit web-server
config:
  limits.cpu: "2"
  limits.memory: 1GB
  limits.memory.swap: "false"
  security.nesting: "false"
description: Profile for web server containers
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 10GB
    type: disk

# Launch a container with this profile
$ lxc launch ubuntu:22.04 prod-web --profile web-server

# Apply a profile to an existing container
$ lxc profile add existing-container web-server

# View container configuration
$ lxc config show prod-web

Setting Resource Limits Directly

# Set CPU limit
$ lxc config set web-server limits.cpu 2

# Set memory limit
$ lxc config set web-server limits.memory 512MB

# Set disk quota (requires btrfs or zfs storage)
$ lxc config device set web-server root size=5GB

# View resource usage
$ lxc info web-server

Storage Pools

LXD manages storage through pools. Different backends offer different features:

+─────────+───────────────+────────+────────────+───────────────+
│ BACKEND │   SNAPSHOTS   │ QUOTAS │ FAST CLONE │   BEST FOR    │
+─────────+───────────────+────────+────────────+───────────────+
│ dir     │ Yes (slow)    │ No     │ No         │ Simple setups │
│ btrfs   │ Yes (instant) │ Yes    │ Yes (CoW)  │ Development   │
│ zfs     │ Yes (instant) │ Yes    │ Yes (CoW)  │ Production    │
│ lvm     │ Yes           │ Yes    │ Yes        │ Enterprise    │
+─────────+───────────────+────────+────────────+───────────────+

# List storage pools
$ lxc storage list

# Create a new storage pool
$ lxc storage create fast-pool zfs size=50GB

# View pool details
$ lxc storage show fast-pool

# View pool usage
$ lxc storage info fast-pool
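
Once a pool exists, a container's root disk can be placed on it at launch time with the --storage flag. A sketch (the container name is ours):

```shell
# Launch a container whose root filesystem lives on fast-pool.
launch_on_pool() {
  lxc launch ubuntu:22.04 "$1" --storage fast-pool
}
# launch_on_pool fast-ct
```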

Networking

LXD provides several networking options.

Default Bridge Network

By default, containers connect to a managed bridge (lxdbr0). LXD runs its own DHCP and DNS server on this bridge.

# View network configuration
$ lxc network show lxdbr0
config:
  ipv4.address: 10.10.10.1/24
  ipv4.nat: "true"
  ipv4.dhcp: "true"
  dns.mode: managed
name: lxdbr0
type: bridge

Containers on the bridge can reach each other by name:

$ lxc exec web-server -- ping -c 2 db-server
PING db-server (10.10.10.101): 56 data bytes
64 bytes from 10.10.10.101: seq=0 ttl=64 time=0.050 ms

Port Forwarding

To make a container service accessible from the host network:

# Forward host port 80 to container port 80
$ lxc config device add web-server http proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80

# Remove the port forward
$ lxc config device remove web-server http
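
To confirm a forward is working, hit the host-side port. A sketch, assuming a web server (like the nginx install earlier) is running inside the container:

```shell
# Request the forwarded port on the host and print the HTTP status.
check_forward() {
  curl -s -o /dev/null -w '%{http_code}\n' "http://localhost:${1:-80}/"
}
# check_forward 80   # expect 200 if the container's web server is up
```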

Macvlan (Direct Network Access)

For containers that need to appear directly on the physical network:

# Create a macvlan profile
$ lxc profile create direct-net
$ lxc profile device add direct-net eth0 nic \
    nictype=macvlan parent=eth0    # parent = your host's NIC name

# Launch a container with direct network access
$ lxc launch ubuntu:22.04 direct-vm --profile direct-net
# This container gets an IP from your physical network's DHCP

Snapshots

Snapshots capture the complete state of a container at a point in time.

# Create a snapshot
$ lxc snapshot web-server clean-install

# List snapshots
$ lxc info web-server
...
Snapshots:
  clean-install (taken at 2024/02/21 10:00 UTC) (stateless)

# Restore from a snapshot
$ lxc restore web-server clean-install

# Create a new container from a snapshot
$ lxc copy web-server/clean-install web-server-clone

# Delete a snapshot
$ lxc delete web-server/clean-install

# Automatic snapshots (create one daily, keep 7)
$ lxc config set web-server snapshots.schedule "0 2 * * *"
$ lxc config set web-server snapshots.schedule.stopped "false"
$ lxc config set web-server snapshots.expiry 7d

Think About It: With ZFS or Btrfs storage backends, snapshots are nearly instant and consume no extra space initially (copy-on-write). This makes them incredibly useful for testing: snapshot, make changes, restore if something breaks.
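
That snapshot-test-restore loop fits naturally in a small wrapper: snapshot, run a risky command inside the container, roll back if it fails. A sketch; the function and snapshot name are ours:

```shell
# Run a command inside a container under snapshot protection:
# if the command fails, restore the container to its prior state.
try_with_snapshot() {
  name="$1"; shift
  lxc snapshot "$name" pre-change
  if ! lxc exec "$name" -- "$@"; then
    echo "command failed, restoring snapshot" >&2
    lxc restore "$name" pre-change
  fi
  lxc delete "$name/pre-change"
}
# try_with_snapshot web-server apt full-upgrade -y
```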


Image Management

LXD pulls images from remote image servers.

# List configured remotes
$ lxc remote list
+──────────────+──────────────────────────────+──────────+
│    NAME      │             URL              │ PROTOCOL │
+──────────────+──────────────────────────────+──────────+
│ images       │ https://images.linuxcontai...│ simplestr│
│ ubuntu       │ https://cloud-images.ubunt...│ simplestr│
│ ubuntu-daily │ https://cloud-images.ubunt...│ simplestr│
+──────────────+──────────────────────────────+──────────+

# List available images from the "images" remote
$ lxc image list images: | head -30

# Search for Debian images
$ lxc image list images: debian

# List locally cached images
$ lxc image list

# Create an image from an existing container (stop it first, or add --force)
$ lxc publish web-server --alias my-web-template

# Launch from your custom image
$ lxc launch my-web-template web-server-2

# Export an image to a file (for transfer)
$ lxc image export my-web-template /tmp/web-template

# Import an image from a file
$ lxc image import /tmp/web-template.tar.gz --alias imported-web

Use Cases for System Containers

1. Development Environments

Give each developer their own full Linux environment:

$ lxc launch ubuntu:22.04 dev-alice --profile dev-workstation
$ lxc launch ubuntu:22.04 dev-bob --profile dev-workstation

# Each developer gets SSH access to their own container
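
How that SSH access might be wired up, sketched as a helper. This assumes the Ubuntu cloud image's default ubuntu user; the key path in the usage comment is only an example:

```shell
# Push a developer's public key into their container so they can
# ssh in as the 'ubuntu' user.
grant_ssh() {
  name="$1"; pubkey="$2"
  lxc exec "$name" -- install -d -m 700 -o ubuntu -g ubuntu \
      /home/ubuntu/.ssh
  lxc file push "$pubkey" "$name/home/ubuntu/.ssh/authorized_keys"
  lxc exec "$name" -- chown ubuntu:ubuntu \
      /home/ubuntu/.ssh/authorized_keys
}
# grant_ssh dev-alice ~/.ssh/id_ed25519.pub
# Then: ssh ubuntu@<container-ip>   (IP from 'lxc list')
```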

2. CI/CD Build Environments

Clean build environments that spin up in seconds:

# Create a golden image with build tools
$ lxc launch ubuntu:22.04 build-template
$ lxc exec build-template -- bash
root# apt install -y build-essential git cmake python3-pip
root# exit
$ lxc stop build-template
$ lxc publish build-template --alias ci-base
$ lxc delete build-template

# Each CI job gets a fresh clone
$ lxc launch ci-base build-job-42
# ... run build ...
$ lxc delete build-job-42 --force
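
The launch/build/delete cycle is easy to script. A sketch of a disposable job runner; the function name, repo argument, and make-based build step are our assumptions:

```shell
# Run one CI job in a fresh clone of the ci-base image, then
# delete the container regardless of the build result.
run_ci_job() {
  job="ci-job-$1"; repo="$2"
  lxc launch ci-base "$job" || return 1
  if lxc exec "$job" -- sh -c "git clone '$repo' /build && cd /build && make"
  then status=0; else status=1; fi
  lxc delete "$job" --force
  return "$status"
}
# run_ci_job 42 https://example.com/project.git
```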

3. Multi-Tenant Hosting

Lightweight VPS hosting:

# Create profiles with resource limits
$ lxc profile create tenant-small
# limits.cpu: 1, limits.memory: 512MB, root size: 10GB

$ lxc profile create tenant-medium
# limits.cpu: 2, limits.memory: 2GB, root size: 50GB

# Launch tenant containers
$ lxc launch ubuntu:22.04 tenant-acme --profile tenant-small
$ lxc launch ubuntu:22.04 tenant-bigco --profile tenant-medium
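
The limits hinted at in the comments above can be set concretely with profile set and profile device add. A sketch for the small tier; note that freshly created profiles have no devices, so root disk and NIC are added here too:

```shell
# Fill in the tenant-small profile created above.
configure_tenant_small() {
  lxc profile set tenant-small limits.cpu 1
  lxc profile set tenant-small limits.memory 512MB
  lxc profile device add tenant-small root disk \
      path=/ pool=default size=10GB
  lxc profile device add tenant-small eth0 nic \
      name=eth0 network=lxdbr0
}
```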

LXD vs Docker: When to Use Which

Need a full Linux system?                     → LXD
Need to package a single application?         → Docker
Need systemd, cron, SSH inside?               → LXD
Need fast, reproducible app deployment?       → Docker
Need persistent, long-lived environments?     → LXD
Need ephemeral, disposable environments?      → Docker
Need to run multiple services per container?  → LXD
Need microservice architecture?               → Docker
Need VM-like behavior without VM overhead?    → LXD
Need CI/CD pipeline integration?              → Both work well

You can even run Docker inside LXD containers (with security.nesting enabled):

$ lxc config set my-container security.nesting true
$ lxc exec my-container -- bash
root# apt install -y docker.io
root# systemctl start docker
root# docker run hello-world

Debug This

A container will not start and shows a permissions error:

$ lxc start my-container
Error: Failed to run: ... apparmor ... DENIED

Diagnosis:

# Check the container log
$ lxc info my-container --show-log

The error is usually related to AppArmor or security profiles blocking the container's operations.

Fix:

# Check if the LXD AppArmor profile is loaded
$ sudo aa-status | grep lxd

# If profiles are missing, reload them
$ sudo systemctl restart snap.lxd.daemon  # snap install
$ sudo systemctl restart lxd              # package install

# As a last resort, set the container to unconfined (not recommended for production)
$ lxc config set my-container raw.lxc "lxc.apparmor.profile=unconfined"

Another common issue -- running out of storage:

$ lxc launch ubuntu:22.04 test
Error: Failed to ... no space left on device

Fix:

# Check storage pool usage
$ lxc storage info default

# If using dir backend, check host disk space
$ df -h

# Clean up unused cached images (list them, then delete by fingerprint or alias)
$ lxc image list
$ lxc image delete <fingerprint>

What Just Happened?

┌─────────────────────────────────────────────────────────────┐
│                        CHAPTER RECAP                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  System containers run full Linux systems with init,        │
│  multiple services, and user accounts.                      │
│                                                             │
│  LXC is the low-level runtime; LXD/Incus provide            │
│  the user-friendly management layer.                        │
│                                                             │
│  lxc launch = create + start a container in seconds.        │
│  lxc exec = run commands inside containers.                 │
│  lxc snapshot = point-in-time recovery.                     │
│                                                             │
│  Profiles define reusable configuration templates           │
│  for CPU, memory, storage, and networking.                  │
│                                                             │
│  Storage backends (ZFS, Btrfs) enable instant snapshots.    │
│                                                             │
│  Networking: bridge (default), macvlan (direct),            │
│  proxy devices (port forwarding).                           │
│                                                             │
│  Use system containers for: dev environments, CI/CD,        │
│  multi-tenant hosting, VM-like workloads.                   │
│  Use application containers for: microservices,             │
│  app packaging, single-process workloads.                   │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Try This

  1. Launch and explore: Launch containers from three different distributions (Ubuntu, Debian, Alpine). Log into each and observe the differences -- package manager, init system, default packages.

  2. Profile practice: Create a profile called limited that restricts containers to 1 CPU and 256MB of RAM. Launch a container with this profile. Inside the container, run free -m and nproc to verify the limits are enforced.

  3. Snapshot workflow: Launch a container, install nginx, create a snapshot. Then break nginx (delete its config files). Restore from the snapshot and verify nginx works again.

  4. Custom image: Configure a container with your preferred development tools (git, vim, your favorite language runtime). Publish it as a custom image. Launch three clones from that image and verify they are all identical.

  5. Networking lab: Launch two containers. Verify they can ping each other by name. Add a proxy device to make one container's nginx accessible from the host on port 8080.

  6. Bonus Challenge: Launch a container, enable security.nesting, install Docker inside it, and run a Docker container within the LXD container. You are now running containers inside containers. Verify the nested container can access the network.