Axum
Foundations
Axum is not a monolith. It is the thin routing layer sitting atop a composed stack of libraries, each with a single clear responsibility. You will encounter all of them as you build the NOC API. Understanding who does what prevents confusion when error messages mention an unfamiliar type name.
THE AXUM STACK
─────────────────────────────────────────────────────────

    Your Application Code
              │
              ▼
    ┌─────────────────────────────────────────────────────┐
    │ axum                                                │
    │   Router: maps paths+methods → handler functions    │
    │   Extractors: parses request into typed arguments   │
    │   Responders: converts handler return → HTTP resp.  │
    └──────────────────────────┬──────────────────────────┘
                               │
                               ▼
    ┌─────────────────────────────────────────────────────┐
    │ tower + tower-http                                  │
    │   Middleware: logging, CORS, auth, compression      │
    │   Service trait: composable request→response units  │
    └──────────────────────────┬──────────────────────────┘
                               │
                               ▼
    ┌─────────────────────────────────────────────────────┐
    │ hyper                                               │
    │   HTTP/1.1 + HTTP/2 parser, connection management   │
    └──────────────────────────┬──────────────────────────┘
                               │
                               ▼
    ┌─────────────────────────────────────────────────────┐
    │ tokio                                               │
    │   Async runtime: I/O, timers, thread pool           │
    └─────────────────────────────────────────────────────┘

Each layer is independently useful and versioned. Axum 0.7 uses Hyper 1.x and Tower 0.4.
On embedded RP2350 targets you used Embassy as the async executor. On the server, Tokio is the executor — a multi-threaded work-stealing runtime that can drive thousands of concurrent connections on a single machine. Both are implementations of the same Rust async model you studied in Chapter 4; the mental model transfers completely.
§ 11.2

The web chapters use standard Rust: std is available, heap allocation is fine, and you have a full operating system. Create a new project and add the dependencies:
cargo new sprint-noc-api
cd sprint-noc-api
[package]
name = "sprint-noc-api"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = { version = "0.7", features = ["ws", "macros"] }
tokio = { version = "1", features = ["full"] }
tower = { version = "0.4" }
tower-http = { version = "0.5", features = ["cors", "trace", "compression-gzip"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
anyhow = "1"
thiserror = "1"
chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1", features = ["v4", "serde"] }

An Axum router maps an HTTP method + path to an async handler function. The path can contain segments that are extracted into typed Rust values; if a segment fails to parse into the type your handler declares, Axum rejects the request with a 4xx before your handler runs. You never write manual string matching for path parameters.
use axum::{
    Router,
    routing::{get, post},
    response::Json,
    extract::Path,
};
use serde::{Deserialize, Serialize};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    // Initialise structured logging to stdout
    tracing_subscriber::fmt()
        .with_env_filter("sprint_noc_api=debug,tower_http=debug")
        .init();

    let app = Router::new()
        .route("/", get(health))
        .route("/api/v1/sites", get(list_sites))
        .route("/api/v1/sites/:id", get(get_site));

    let listener = TcpListener::bind("0.0.0.0:8080").await.unwrap();
    tracing::info!("Sprint NOC API listening on :8080");
    axum::serve(listener, app).await.unwrap();
}

// Handler: GET /
async fn health() -> &'static str {
    "Sprint NOC API — operational"
}

// Handler: GET /api/v1/sites
async fn list_sites() -> Json<Vec<&'static str>> {
    Json(vec!["Kilimanjaro", "Serengeti", "Drakensberg", "Rwenzori"])
}

// Handler: GET /api/v1/sites/:id
// Path extractor automatically parses :id from the URL
async fn get_site(Path(id): Path<String>) -> String {
    format!("Site details for: {}", id)
}
Extractors are the defining feature of Axum. A handler is just an async function whose arguments implement the FromRequest or FromRequestParts trait. Axum calls your handler with whatever types you declare, extracting them from the incoming HTTP request automatically. If extraction fails, Axum returns an appropriate 4xx error before your handler runs.
Common built-in extractors
Path(id): Path<String> — captures path segments. /sites/:id → Path(id).
Query(params): Query<T> — deserialises query string into T (must impl Deserialize).
Json(body): Json<T> — reads and deserialises the request body as JSON into T.
State(state): State<AppState> — gives the handler access to shared application state (database pool, config, broadcast channels).
Extension(user): Extension<AuthUser> — type-map inserted by middleware, consumed by handlers. This is how auth middleware passes the authenticated user to route handlers.
headers: HeaderMap — raw access to all HTTP headers.
Extractors compose: you can have as many as you need in one handler. The only restriction: the body extractor (Json, Bytes, String) can appear at most once, and must be the last argument.
use axum::extract::{Path, Query, State};
use axum::Json;
use serde::Deserialize;

#[derive(Deserialize)]
struct AlertQuery {
    site: Option<String>,
    severity: Option<String>,
    limit: Option<u32>,
}

// GET /api/v1/alerts?site=Kilimanjaro&severity=high&limit=10
async fn list_alerts(
    State(state): State<AppState>,
    Query(q): Query<AlertQuery>,
) -> Json<Vec<Alert>> {
    let limit = q.limit.unwrap_or(20);
    let alerts = state.db.fetch_alerts(q.site, q.severity, limit).await;
    Json(alerts)
}

// POST /api/v1/alerts — JSON body is the last (only body) extractor
#[derive(Deserialize)]
struct CreateAlertRequest {
    site: String,
    message: String,
    severity: String,
}

async fn create_alert(
    State(state): State<AppState>,
    Json(body): Json<CreateAlertRequest>, // body extractor last
) -> impl axum::response::IntoResponse {
    // handle creation ...
    (axum::http::StatusCode::CREATED, Json(serde_json::json!({"created": true})))
}
Axum runs your handlers concurrently across Tokio's thread pool. Multiple requests can execute the same handler simultaneously. Shared state — database connection pools, configuration, broadcast channels — must be safe to access concurrently. The standard pattern is Arc<AppState>: reference-counted, heap-allocated, immutable-by-default. For values that need interior mutability (like a counter), you add Mutex<T> inside the Arc.
use std::sync::Arc;
use tokio::sync::broadcast;

// Everything your handlers might need
#[derive(Clone)]
pub struct AppState {
    // Clone of Arc is cheap — it just increments the reference count
    pub db: Arc<Database>,     // Chapter 12: SQLx pool
    pub config: Arc<Config>,
    // Broadcast channel for real-time alerts (Chapter 14)
    pub alert_tx: broadcast::Sender<AlertEvent>,
}

impl AppState {
    pub fn new(db: Database, config: Config) -> Self {
        let (alert_tx, _) = broadcast::channel(256);
        Self {
            db: Arc::new(db),
            config: Arc::new(config),
            alert_tx,
        }
    }
}

// Register with the router:
let app = Router::new()
    .route("/api/v1/alerts", get(list_alerts).post(create_alert))
    .with_state(state);   // <-- Arc clone distributed to all handlers
Handlers can return anything that implements IntoResponse. The common return type for fallible handlers is Result<impl IntoResponse, AppError> where AppError is your custom error type that itself implements IntoResponse. This is the same thiserror pattern from Chapter 3, extended to HTTP.
use axum::{
    extract::{Path, State},
    http::StatusCode,
    response::{IntoResponse, Response},
    Json,
};
use serde_json::json;
use thiserror::Error;

#[derive(Debug, Error)]
pub enum ApiError {
    #[error("not found: {0}")]
    NotFound(String),
    #[error("unauthorised")]
    Unauthorised,
    #[error("database error: {0}")]
    Database(#[from] sqlx::Error),
    #[error("internal error")]
    Internal(#[from] anyhow::Error),
}

// Converts ApiError into an HTTP response automatically
impl IntoResponse for ApiError {
    fn into_response(self) -> Response {
        let (status, message) = match &self {
            ApiError::NotFound(msg) => (StatusCode::NOT_FOUND, msg.clone()),
            ApiError::Unauthorised => (StatusCode::UNAUTHORIZED, self.to_string()),
            ApiError::Database(_) => (StatusCode::INTERNAL_SERVER_ERROR, "database error".into()),
            ApiError::Internal(_) => (StatusCode::INTERNAL_SERVER_ERROR, "internal error".into()),
        };
        (status, Json(json!({ "error": message }))).into_response()
    }
}

// Usage in any handler:
async fn get_alert(
    Path(id): Path<uuid::Uuid>,
    State(state): State<AppState>,
) -> Result<Json<Alert>, ApiError> {
    let alert = state.db.find_alert(id).await?   // ? converts sqlx::Error → ApiError::Database
        .ok_or_else(|| ApiError::NotFound(format!("alert {}", id)))?;
    Ok(Json(alert))
}
Tower middleware wraps every request in a Service call chain. You add middleware with .layer() on the router. Repeated .layer() calls on a Router apply inside-out: the last .layer() wraps outermost and sees the request first. (A tower ServiceBuilder reads the other way around: the layer you add to it first is outermost.) This ordering matters for auth middleware, which must run before business logic, and for logging, which should wrap everything to capture total latency.
use axum::http::{header::{AUTHORIZATION, CONTENT_TYPE}, Method};
use tower::ServiceBuilder;
use tower_http::{
    cors::{Any, CorsLayer},
    trace::TraceLayer,
    compression::CompressionLayer,
};

let middleware = ServiceBuilder::new()
    .layer(TraceLayer::new_for_http())   // structured request/response logging
    .layer(CompressionLayer::new())      // gzip response compression
    .layer(
        CorsLayer::new()
            .allow_origin(Any)           // restrict in production
            .allow_methods([Method::GET, Method::POST])
            .allow_headers([CONTENT_TYPE, AUTHORIZATION]),
    );

let app = Router::new()
    .route("/api/v1/alerts", get(list_alerts))
    .layer(middleware)     // applies to all routes
    .with_state(state);

// Route-specific middleware (e.g., auth on /api/v1/* only):
let protected = Router::new()
    .route("/api/v1/alerts", get(list_alerts).post(create_alert))
    .layer(auth_middleware());   // Chapter 13

let app = Router::new()
    .route("/", get(health))     // public — no auth
    .merge(protected)            // protected routes
    .layer(TraceLayer::new_for_http());
The mental model you already have
You built Sprint's network around layers: BGP peering layer, IGP layer, LDP MPLS label layer, application services on top. Axum layers are the same architectural discipline applied to HTTP. TraceLayer is your syslog. CorsLayer is your ACL. AuthLayer is your RADIUS gate. Each does one job cleanly. You compose them in a deterministic order and the system becomes predictable. That is why Rust attracted you to infrastructure in the first place.
Build the Sites Endpoints
Create handlers for the four Sprint Tanzania sites: Kilimanjaro, Serengeti, Drakensberg, and Rwenzori. Each site should have a name, city, status ("operational" | "degraded" | "down"), and latency_ms field. Store them in a Vec<Site> inside AppState behind an Arc<RwLock<Vec<Site>>>. Implement GET /api/v1/sites (list all) and GET /api/v1/sites/:name (get one, returning 404 if not found using your ApiError). Test with curl.
Request Timing Middleware
Write a custom Tower middleware using tower::layer::layer_fn that records the duration of every request and adds an X-Response-Time-Ms header to every response. Log each request with method, path, status code, and duration using tracing::info!. This pattern is how Isaac's team at Sprint monitors API health before Zabbix integration.