Interrupts and Async
So far, our code has followed a straight path: do this, wait, do that, repeat. But embedded systems need to react to the real world, and the real world doesn't wait politely for your code to finish a loop iteration.
The Polling Problem
Let's say you want to detect a button press while blinking an LED. The naive approach is polling — checking the button inside your blink loop:
```rust
loop {
    led.toggle();
    Timer::after_millis(500).await;

    // Check button between blinks
    if button.is_low() {
        defmt::info!("Pressed!");
    }
}
```
This has two serious problems. First, if the button is pressed and released during the 500 ms wait, you miss it entirely. Second, you're checking only twice per second — that's a 500 ms worst-case response time. For a button, that's merely annoying. For a safety-critical sensor signal, it's catastrophic.
🧠 Think About It: Imagine a motor controller that polls a limit switch at 2 Hz. The motor could travel a long distance in 500ms. In industrial systems, missed events don't just cause bugs — they cause damage.
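To put rough numbers on that scenario (the conveyor speed here is an illustrative assumption, not from any particular datasheet):

```rust
// Worst-case reaction distance for a polled limit switch.
// If the event happens just after a poll, the latency is one full
// polling period, so distance = speed / poll rate.
fn worst_case_travel_mm(speed_mm_per_s: f64, poll_hz: f64) -> f64 {
    speed_mm_per_s / poll_hz
}

fn main() {
    // A 0.5 m/s axis polled at 2 Hz can travel up to 250 mm
    // before the code even notices the switch.
    println!("{} mm", worst_case_travel_mm(500.0, 2.0));
}
```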
How Interrupts Work
The hardware solution is interrupts. When an event occurs (pin changes state, timer expires, byte received), the hardware immediately pauses whatever the CPU is doing, saves its state, and jumps to a special function called an interrupt handler (or ISR — Interrupt Service Routine). When the handler finishes, the CPU resumes exactly where it left off.
```text
Normal code: ──── running ─────┐          ┌──── continues ────
                               │          │
Interrupt:                     └── ISR ──┘
                                    ↑
                         Hardware event fires
```
STM32 chips use the NVIC (Nested Vectored Interrupt Controller), which supports:
- Priority levels 0–15 (0 is highest priority, 15 is lowest)
- Nesting — a higher-priority interrupt can preempt a lower-priority one
- Dozens of interrupt sources — each peripheral can trigger its own
The Traditional Pain
In C (and bare-metal Rust), working with interrupts is notoriously tricky:
- You write an ISR function that must be short and fast
- You communicate with main code through volatile global variables
- You need critical sections to prevent data races
- You manage priorities carefully to avoid deadlocks
- Debugging is miserable because the flow is non-linear
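The shared-flag pattern at the heart of this pain can be modeled on a PC, with a spawned thread standing in for the ISR (a host-side sketch only; real firmware would set the flag from an interrupt handler):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

// In bare-metal code an ISR would set this flag; here a thread
// simulates the hardware event.
static BUTTON_PRESSED: AtomicBool = AtomicBool::new(false);

fn main() {
    // "ISR": fires once after 10 ms and records the event.
    let isr = thread::spawn(|| {
        thread::sleep(Duration::from_millis(10));
        BUTTON_PRESSED.store(true, Ordering::Release);
    });

    // "main loop": must remember to poll the flag and clear it by hand.
    // Nothing forces every reader to get this sequence right, which is
    // exactly the error-prone part of the traditional approach.
    loop {
        if BUTTON_PRESSED.swap(false, Ordering::Acquire) {
            println!("Pressed!");
            break;
        }
        thread::sleep(Duration::from_millis(1));
    }
    isr.join().unwrap();
}
```

Even this tiny example needs careful memory ordering and a swap-to-clear idiom; scale it to a dozen interrupt sources and the bookkeeping becomes fragile fast.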
This is where Embassy changes the game entirely.
Embassy's Async Model
Embassy maps hardware interrupts to Rust's async/await system. Instead of writing ISR callbacks and juggling shared volatile state, you write tasks that look like normal sequential code. When a task calls .await, it yields the CPU and sleeps until the hardware event occurs.
Under the hood, Embassy still uses interrupts — but it wraps them so you never touch the raw interrupt machinery yourself.
```text
Traditional:  main() ←── shared volatile ──→ ISR()
              (manual synchronization, unsafe, error-prone)

Embassy:      async task1().await    async task2().await
              (compiler-checked, safe, composable)
```
Your First Async Task
Let's write a button handler as a standalone Embassy task:
```rust
use embassy_executor::Spawner;
use embassy_stm32::exti::ExtiInput;
use embassy_stm32::gpio::Pull;
use embassy_time::Timer;

#[embassy_executor::task]
async fn button_task(mut button: ExtiInput<'static>) {
    loop {
        button.wait_for_falling_edge().await;
        defmt::info!("Button pressed!");
        Timer::after_millis(50).await; // debounce
    }
}

#[embassy_executor::main]
async fn main(spawner: Spawner) {
    let p = embassy_stm32::init(Default::default());
    let button = ExtiInput::new(p.PC13, p.EXTI13, Pull::Up);
    spawner.spawn(button_task(button)).unwrap();
}
```
When button_task hits wait_for_falling_edge().await, it sleeps, consuming zero CPU, until the EXTI hardware interrupt fires on a pin state change.
Running Multiple Concurrent Tasks
The real power shows up when you have several things happening at once. Embassy's executor runs tasks cooperatively — each task runs until it hits an .await, then the executor checks if any other task is ready to run.
```rust
use embassy_executor::Spawner;
use embassy_stm32::exti::ExtiInput;
use embassy_stm32::gpio::{Level, Output, Pull, Speed};
use embassy_time::Timer;

#[embassy_executor::task]
async fn blink_task(mut led: Output<'static>) {
    loop {
        led.toggle();
        Timer::after_millis(500).await;
    }
}

#[embassy_executor::task]
async fn button_task(mut button: ExtiInput<'static>) {
    let mut count: u32 = 0;
    loop {
        button.wait_for_falling_edge().await;
        count += 1;
        defmt::info!("Button pressed {} times", count);
        Timer::after_millis(50).await; // debounce
    }
}

#[embassy_executor::task]
async fn heartbeat_task() {
    loop {
        defmt::info!("System alive");
        Timer::after_secs(5).await;
    }
}

#[embassy_executor::main]
async fn main(spawner: Spawner) {
    let p = embassy_stm32::init(Default::default());

    let led = Output::new(p.PE3, Level::High, Speed::Low);
    let button = ExtiInput::new(p.PC13, p.EXTI13, Pull::Up);

    spawner.spawn(blink_task(led)).unwrap();
    spawner.spawn(button_task(button)).unwrap();
    spawner.spawn(heartbeat_task()).unwrap();

    // All tasks are running. Main has nothing else to do.
    // The executor puts the CPU to sleep when all tasks are awaiting.
}
```
Three tasks run concurrently on a single-core MCU with no RTOS, no threads, and no heap allocation. When all tasks are awaiting, the executor puts the CPU into low-power sleep. The next hardware event wakes it.
💡 Fun Fact: Embassy compiles async tasks into state machines at compile time. No heap allocation, no runtime task control blocks. The resulting code is often smaller and faster than hand-written interrupt-based C.
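That state-machine transformation can be sketched by hand (a simplified model for intuition only, not the compiler's actual output, which also stores wakers and borrowed state):

```rust
// A hand-rolled state machine roughly equivalent to:
//   async fn blink() { loop { led.toggle(); Timer::after_millis(500).await; } }
enum BlinkState {
    Toggle,    // about to toggle the LED
    WaitTimer, // parked at the .await, waiting for the timer event
}

/// One poll of the task: run until we must wait for hardware.
/// `timer_fired` stands in for "the awaited timer interrupt ran";
/// `toggles` stands in for the LED side effect.
fn poll_blink(state: &mut BlinkState, mut timer_fired: bool, toggles: &mut u32) {
    loop {
        match *state {
            BlinkState::Toggle => {
                *toggles += 1;                  // led.toggle()
                *state = BlinkState::WaitTimer; // reach the .await
            }
            BlinkState::WaitTimer => {
                if timer_fired {
                    timer_fired = false;        // event consumed
                    *state = BlinkState::Toggle;
                } else {
                    return; // Pending: yield back to the executor
                }
            }
        }
    }
}

fn main() {
    let mut state = BlinkState::Toggle;
    let mut toggles = 0;
    poll_blink(&mut state, false, &mut toggles); // toggles once, then parks
    poll_blink(&mut state, true, &mut toggles);  // timer fired: toggles again
    println!("toggles = {}", toggles);
}
```

The enum is the task's entire runtime footprint: no heap, no task control block, just a discriminant and whatever locals live across the `.await`.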
Sharing Data Between Tasks
Tasks that run independently are nice, but eventually they need to communicate. Embassy provides several primitives for this, all designed for embedded use (no heap, no std).
Channel — Message Queue
A Channel is a fixed-size queue. One task sends values, another receives them. If the channel is full, the sender waits. If empty, the receiver waits.
```rust
use embassy_stm32::exti::ExtiInput;
use embassy_stm32::gpio::Output;
use embassy_sync::blocking_mutex::raw::CriticalSectionRawMutex;
use embassy_sync::channel::Channel;
use embassy_time::Timer;

// Channel that holds up to 4 u32 values
static EVENT_CHANNEL: Channel<CriticalSectionRawMutex, u32, 4> = Channel::new();

#[embassy_executor::task]
async fn producer_task(mut button: ExtiInput<'static>) {
    let mut count: u32 = 0;
    loop {
        button.wait_for_falling_edge().await;
        count += 1;
        EVENT_CHANNEL.send(count).await;
        Timer::after_millis(50).await; // debounce
    }
}

#[embassy_executor::task]
async fn consumer_task(mut led: Output<'static>) {
    loop {
        let count = EVENT_CHANNEL.receive().await;
        defmt::info!("Event #{}", count);

        // Flash LED to acknowledge
        led.set_low();
        Timer::after_millis(100).await;
        led.set_high();
    }
}
```
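Embassy's Channel targets no_std, but its bounded-FIFO semantics can be previewed on a desktop with the standard library's sync_channel (the APIs differ; this is an analogy, not Embassy code):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Bounded queue of capacity 4, like Channel<_, u32, 4>.
    let (tx, rx) = sync_channel::<u32>(4);

    let producer = thread::spawn(move || {
        for n in 1..=8 {
            // send() blocks when the queue already holds 4 items.
            // Embassy's send(..).await yields to other tasks instead
            // of blocking a thread, but the backpressure is the same.
            tx.send(n).unwrap();
        }
    });

    // Receiving drains the queue in FIFO order and unblocks the sender.
    let received: Vec<u32> = rx.iter().collect();
    producer.join().unwrap();
    println!("{:?}", received);
}
```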
Signal — Latest Value Notification
A Signal holds a single value and notifies the waiting task. Unlike Channel, if you signal multiple times before the receiver wakes up, only the latest value is kept. Perfect for sensor readings where you always want the most recent data.
```rust
use embassy_sync::blocking_mutex::raw::CriticalSectionRawMutex;
use embassy_sync::signal::Signal;
use embassy_time::Timer;

static TEMPERATURE: Signal<CriticalSectionRawMutex, i32> = Signal::new();

#[embassy_executor::task]
async fn sensor_task() {
    loop {
        let temp = read_temperature(); // your sensor-reading function
        TEMPERATURE.signal(temp);
        Timer::after_secs(1).await;
    }
}

#[embassy_executor::task]
async fn display_task() {
    loop {
        let temp = TEMPERATURE.wait().await;
        defmt::info!("Temperature: {} C", temp);
    }
}
```
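The "latest value wins" behavior can be modeled in a few lines of plain Rust (an illustrative model only; Embassy's Signal is additionally async and interrupt-safe):

```rust
// Minimal single-slot model of Signal semantics.
struct LatestValue<T> {
    slot: Option<T>,
}

impl<T> LatestValue<T> {
    fn new() -> Self {
        Self { slot: None }
    }

    // Like Signal::signal: overwrite whatever was there before.
    fn signal(&mut self, value: T) {
        self.slot = Some(value);
    }

    // Like Signal::wait resolving: take the stored value, if any.
    fn try_take(&mut self) -> Option<T> {
        self.slot.take()
    }
}

fn main() {
    let mut temp = LatestValue::new();
    temp.signal(21);
    temp.signal(23); // receiver hasn't woken yet: 21 is dropped
    println!("{:?}", temp.try_take()); // only the newest value survives
    println!("{:?}", temp.try_take()); // slot is now empty
}
```

This single-slot overwrite is exactly why Signal suits sensor data: a stale reading has no value once a fresher one exists.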
Mutex — Shared Mutable Access
When multiple tasks need to read and write the same data structure, wrap it in a Mutex. The async mutex yields instead of spinning, so other tasks can run while one holds the lock.
```rust
use embassy_stm32::exti::ExtiInput;
use embassy_sync::blocking_mutex::raw::CriticalSectionRawMutex;
use embassy_sync::mutex::Mutex;
use embassy_time::Timer;

static STATE: Mutex<CriticalSectionRawMutex, u32> = Mutex::new(0);

#[embassy_executor::task]
async fn button_handler(mut button: ExtiInput<'static>) {
    loop {
        button.wait_for_falling_edge().await;
        {
            let mut count = STATE.lock().await;
            *count += 1;
        } // lock released here
        Timer::after_millis(50).await; // debounce
    }
}
```
🧠 Think About It: We use CriticalSectionRawMutex everywhere. On a single-core MCU, a critical section (briefly disabling interrupts) is the simplest correct synchronization, and it remains correct even when data is shared with interrupt handlers. If your data is only ever touched by tasks on a single thread-mode executor, the cheaper ThreadModeRawMutex works too — it skips the critical section entirely.
Choosing the Right Primitive
| Primitive | Best For | Behavior |
|---|---|---|
| Channel | Event streams, command queues | Buffered FIFO, backpressure when full |
| Signal | Latest-value notifications | Single value, newer overwrites older |
| Mutex | Shared state accessed by multiple tasks | Exclusive access, async-aware locking |
What's Next?
You can now run concurrent tasks that communicate safely. The next chapter covers timers and PWM — hardware counters that let you generate precise waveforms, control servos, and run real-time control loops at exact frequencies.