Rust Security Code Review: When Memory Safety Isn't Enough
14 min read
December 7, 2025

👋 Introduction
Hey everyone!
On November 18, 2025, Cloudflare went down and took half the Internet with it. ChatGPT stopped responding. Claude returned errors. Shopify, Uber, Dropbox. All showing 5xx errors for hours. The culprit? A single line of Rust code.
.unwrap()
That’s it. One .unwrap() in production code that assumed “this will never happen.” But it did happen. A configuration file doubled in size. The code panicked. And 330+ data centers across the globe stopped serving traffic.
This incident got me digging deeper into Rust security. I’ve been studying Rust for blockchain work (Solana programs, mostly) and kept hearing the same mantra: “It’s memory safe, so it’s secure.” But the Cloudflare outage proved what I suspected. Memory safety doesn’t mean security.
After going through the postmortem, analyzing similar incidents, and reviewing Rust CVEs, I realized Rust has security problems that most developers don’t talk about. Panics that cause DoS. Integer overflows that wrap silently in release mode. Logic bugs the compiler can’t catch. And unsafe blocks where all bets are off.
Rust is being adopted everywhere. The Linux kernel, Android, Windows components, Solana smart contracts, crypto wallets, embedded systems. As Rust codebases grow, so does the attack surface. And most developers assume the compiler catches everything. It doesn’t.
The worst part? Traditional security tools designed for C/C++ don’t understand Rust’s semantics. And auditors trained on memory corruption bugs often miss the subtle logic flaws that Rust allows.
In this issue, we’ll cover:
- Common security vulnerabilities in Rust code
- The dangers hiding in unsafe blocks
- Integer overflow and underflow exploitation
- Panic-based denial of service attacks
- Logic bugs the borrow checker can’t catch
- FFI security pitfalls
- Tools and techniques for Rust security audits
- Defense strategies and secure coding patterns
If you’re auditing Rust code, building Rust applications, or just curious about the security landscape beyond memory safety, this is essential knowledge.
Let’s break some assumptions 👇
🔍 The Myth of Complete Safety
Let’s be clear about what Rust guarantees and what it doesn’t.
What Rust Prevents
Memory Safety:
// This won't compile
let mut v = vec![1, 2, 3];
let first = &v[0];
v.push(4); // Error: can't mutate while borrowed
println!("{}", first);
The borrow checker catches this at compile time. No dangling pointers, no data races.
Thread Safety:
// This won't compile
let mut data = vec![1, 2, 3];
std::thread::spawn(move || {
data.push(4);
});
data.push(5); // Error: value moved
Rust’s ownership system prevents data races by construction.
What Rust Doesn’t Prevent
Logic Bugs:
// Compiles fine, logic is wrong
fn withdraw(balance: &mut u64, amount: u64) -> bool {
*balance -= amount; // VULNERABILITY: No balance check at all
true
}
The borrow checker doesn’t care about business logic. If amount > *balance, this subtraction panics in debug builds and silently wraps around in release builds.
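A minimal sketch of a safer version, keeping the article's illustrative signature: checked_sub turns the missing balance check into an explicit failure.

```rust
// Safer: checked_sub returns None on underflow, so insufficient
// funds become an explicit failure instead of a wrap or panic.
fn withdraw_checked(balance: &mut u64, amount: u64) -> bool {
    match balance.checked_sub(amount) {
        Some(new_balance) => {
            *balance = new_balance;
            true
        }
        None => false, // insufficient funds
    }
}
```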
Integer Overflow (in release mode):
// Compiles fine, overflows in production
fn calculate_fee(price: u64) -> u64 {
price * 10 / 100 // VULNERABILITY: Can overflow
}
In debug builds, this panics. In release builds, it wraps silently.
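A hedged sketch of the fix for the fee calculation above: checked_mul surfaces the overflow case as a None instead of wrapping silently in release builds.

```rust
// Safer: the overflow case is an explicit None rather than a
// silent wrap in release builds.
fn calculate_fee_checked(price: u64) -> Option<u64> {
    price.checked_mul(10).map(|v| v / 100)
}
```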
Panic-Based DoS:
// Compiles fine, panics on invalid input
fn process(data: &[u8]) {
let value = data[0]; // VULNERABILITY: Panics if data is empty
// ...
}
Out-of-bounds access panics instead of corrupting memory. But a panic can still take down your service.
🧨 Unsafe Rust: Where Dragons Live
The unsafe keyword is Rust’s escape hatch. It disables the borrow checker for specific operations. And that’s where memory corruption bugs can still hide.
When Unsafe Is Necessary
Rust requires unsafe for:
- Dereferencing raw pointers
- Calling unsafe functions
- Implementing unsafe traits
- Accessing mutable statics
- FFI calls to C/C++ code
Legitimate use:
unsafe fn read_volatile_register(addr: usize) -> u32 {
std::ptr::read_volatile(addr as *const u32)
}
This is necessary for hardware interaction. But it’s also where bugs creep in.
Vulnerable Unsafe Code
Unvalidated Pointer Dereference:
// VULNERABILITY: No bounds checking
unsafe fn get_element(ptr: *const u32, index: usize) -> u32 {
*ptr.add(index) // Can read arbitrary memory
}
If index is attacker-controlled, this is a memory disclosure vulnerability.
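A minimal sketch of the safe alternative: take a slice instead of a raw pointer, so the length travels with the data and the bounds check is explicit.

```rust
// Safer: the slice carries its own length, and get() performs
// the bounds check, returning None for out-of-range indices.
fn get_element_safe(data: &[u32], index: usize) -> Option<u32> {
    data.get(index).copied()
}
```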
Use-After-Free:
// VULNERABILITY: Dangling pointer
let mut data = Box::new(42);
let ptr = &*data as *const i32;
drop(data); // Memory freed
unsafe {
println!("{}", *ptr); // Use-after-free
}
Inside unsafe, Rust can’t protect you.
Incorrect Lifetime Assumptions:
// VULNERABILITY: Returns dangling reference
unsafe fn dangling_ref<'a>() -> &'a str {
let s = String::from("temp");
std::mem::transmute::<&str, &'a str>(s.as_str())
// s is dropped, reference is dangling
}
Using transmute to lie about lifetimes bypasses safety checks.
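The straightforward fix, sketched minimally: return an owned value so ownership transfers to the caller and no lifetime needs to be invented.

```rust
// Safer: returning an owned String moves ownership to the caller,
// so there is no reference to dangle and no lifetime to fake.
fn owned_value() -> String {
    String::from("temp")
}
```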
🔢 Integer Overflow and Underflow
Rust’s integer behavior changes between debug and release builds.
Debug vs Release Mode
Debug mode (default for cargo build):
fn add_one(x: u8) -> u8 {
    x + 1 // Panics in debug: attempt to add with overflow
}
add_one(255);
Release mode (cargo build --release):
add_one(255); // Wraps to 0, no panic
(With literal constants like 255u8 + 1, rustc rejects the overflow at compile time, which is why the example routes the value through a function parameter.)
This is dangerous. Code tested in debug mode can silently misbehave in production.
Exploitable Overflow
Token Balance Calculation:
// VULNERABILITY: Overflow in fee calculation
fn calculate_total(price: u64, quantity: u32) -> u64 {
price * (quantity as u64) // Can overflow
}
// Attacker sets quantity = u32::MAX
// price * quantity wraps, total becomes tiny
In a marketplace, this could let attackers buy expensive items for pennies.
Safe Alternatives
Checked arithmetic:
fn safe_multiply(a: u64, b: u64) -> Result<u64, &'static str> {
a.checked_mul(b).ok_or("Overflow")
}
Saturating arithmetic:
fn safe_add(a: u64, b: u64) -> u64 {
a.saturating_add(b) // Clamps to u64::MAX instead of wrapping
}
Wrapping arithmetic (explicit):
fn intentional_wrap(a: u8, b: u8) -> u8 {
a.wrapping_add(b) // Makes wrapping behavior explicit
}
Use checked_*, saturating_*, or wrapping_* methods. Never rely on default overflow behavior in security-critical code.
💥 Panic-Based Denial of Service
Panics in Rust are like exceptions in other languages, but with a key difference: by default, panics unwind the stack and terminate the thread. In single-threaded services, that means the entire service crashes.
Panic Sources
Array Indexing:
// VULNERABILITY: Panics if index is out of bounds
fn get_user_score(scores: &[u32], user_id: usize) -> u32 {
scores[user_id] // Panics if user_id >= scores.len()
}
If user_id comes from user input, an attacker can crash the service.
Unwrap and Expect:
// VULNERABILITY: Panics if input is invalid
fn parse_config(json: &str) -> Config {
serde_json::from_str(json).unwrap() // Panics on invalid JSON
}
Any malformed input crashes the application.
Division by Zero:
// VULNERABILITY: Panics if denominator is zero
fn calculate_ratio(numerator: u64, denominator: u64) -> u64 {
numerator / denominator // Panics if denominator == 0
}
Slice Operations:
// VULNERABILITY: Panics if range is invalid
fn extract_header(data: &[u8]) -> &[u8] {
&data[0..20] // Panics if data.len() < 20
}
Real-World Example: Solana Programs
Solana smart contracts (programs) are written in Rust. A panic in a program causes the transaction to fail. Attackers can use this for griefing attacks:
// Vulnerable Solana program
pub fn process_instruction(accounts: &[AccountInfo], data: &[u8]) -> ProgramResult {
// VULNERABILITY: Panics if data.len() < 8
let amount_bytes: [u8; 8] = data[0..8].try_into().unwrap();
let amount = u64::from_le_bytes(amount_bytes);
// If data is shorter than 8 bytes, this panics
// Transaction fails, attacker can DoS the program
Ok(())
}
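A minimal panic-free sketch of the amount parsing above. The error type here is a plain &'static str standing in for Solana's ProgramError; the point is that every failure path returns an error instead of panicking.

```rust
// Safer: out-of-range and conversion failures become errors,
// never panics, so malformed instruction data can't grief the program.
fn parse_amount(data: &[u8]) -> Result<u64, &'static str> {
    let bytes: [u8; 8] = data
        .get(0..8)                     // None instead of panic when too short
        .and_then(|s| s.try_into().ok())
        .ok_or("instruction data too short")?;
    Ok(u64::from_le_bytes(bytes))
}
```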
Safe Alternatives
Use safe accessors:
// Safe: Returns None instead of panicking
fn get_user_score_safe(scores: &[u32], user_id: usize) -> Option<u32> {
scores.get(user_id).copied()
}
// Safe: Returns Result
fn parse_config_safe(json: &str) -> Result<Config, serde_json::Error> {
serde_json::from_str(json)
}
// Safe: Explicit check
fn calculate_ratio_safe(numerator: u64, denominator: u64) -> Option<u64> {
if denominator == 0 {
None
} else {
Some(numerator / denominator)
}
}
Rule of thumb: In production code, avoid unwrap(), expect(), direct indexing, and unchecked arithmetic. Use ?, match, if let, and get().
🔗 FFI: The Unsafe Boundary
Foreign Function Interface (FFI) allows Rust to call C/C++ code. This is necessary for interacting with existing libraries, but it’s also where Rust’s guarantees end.
FFI Vulnerabilities
Unvalidated C Strings:
use std::ffi::{CStr, c_char};
// VULNERABILITY: No validation of C string
unsafe fn call_c_function(input: *const c_char) {
let c_str = CStr::from_ptr(input); // UNSAFE: Assumes input is valid & null-terminated
// If input is not null-terminated, this can read past buffer
let rust_str = c_str.to_str().unwrap();
}
Buffer Overflow via C:
extern "C" {
fn unsafe_copy(dest: *mut u8, src: *const u8, len: usize);
}
// VULNERABILITY: C function doesn't check bounds
unsafe fn copy_data(dest: &mut [u8], src: &[u8]) {
unsafe_copy(dest.as_mut_ptr(), src.as_ptr(), src.len());
// If src.len() > dest.len(), buffer overflow
}
Type Confusion:
// VULNERABILITY: C function expects different layout
struct Item {
id: u32,
value: u64,
}
#[repr(C)]
struct Data {
count: u32,
items: *mut Item,
}
// If C code expects different field order or alignment, memory corruption
Safe FFI Practices
- Validate all inputs before passing to C:
use std::ffi::{CString, NulError};

fn safe_c_string(s: &str) -> Result<CString, NulError> {
    CString::new(s) // Fails if the string contains interior null bytes
}
- Check buffer sizes before calling C:
unsafe fn safe_copy(dest: &mut [u8], src: &[u8]) -> Result<(), &'static str> {
if src.len() > dest.len() {
return Err("Buffer too small");
}
unsafe_copy(dest.as_mut_ptr(), src.as_ptr(), src.len());
Ok(())
}
- Use #[repr(C)] for FFI structs:
#[repr(C)] // Ensures C-compatible layout
struct FfiData {
x: u32,
y: u64,
}
- Never trust C code: Assume C functions can violate Rust’s invariants. Validate everything.
🐛 Logic Bugs the Compiler Can’t Catch
These are the vulnerabilities that make Rust codebases just as vulnerable as any other language when it comes to application logic.
Authentication Bypass
// VULNERABILITY: Logic error in authentication
fn authenticate(username: &str, password: &str, stored_hash: &str) -> bool {
let hash = compute_hash(password);
hash == stored_hash // VULNERABILITY: No error handling
}
// Usage
if authenticate("admin", user_input, stored) {
grant_access();
}
// If compute_hash() returns empty string on error,
// attacker can trigger error condition to bypass auth
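A hedged sketch of the fix: make compute_hash return a Result so a hashing failure propagates as an error rather than becoming a comparable sentinel value. The compute_hash body here is a labeled placeholder, not a real hash function.

```rust
// Stand-in for the article's compute_hash; the format! call is a
// placeholder, NOT a real hash. The point is the Result signature.
fn compute_hash(password: &str) -> Result<String, &'static str> {
    if password.is_empty() {
        return Err("empty password");
    }
    Ok(format!("hashed:{password}"))
}

// Safer: a hashing error short-circuits via `?` and is never
// compared against the stored hash, closing the bypass.
fn authenticate(password: &str, stored_hash: &str) -> Result<bool, &'static str> {
    let hash = compute_hash(password)?;
    Ok(hash == stored_hash)
}
```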
Race Conditions
Rust prevents data races, but not logical race conditions:
use std::sync::{Arc, Mutex};
struct Account {
balance: u64,
}
// VULNERABILITY: Time-of-check to time-of-use (TOCTOU)
fn withdraw(account: &Arc<Mutex<Account>>, amount: u64) -> bool {
let balance = {
let acc = account.lock().unwrap();
acc.balance // Check
};
if balance >= amount {
// VULNERABILITY: Another thread can withdraw between check and use
std::thread::sleep(std::time::Duration::from_millis(100));
let mut acc = account.lock().unwrap();
acc.balance -= amount; // Use
true
} else {
false
}
}
Two threads can both pass the check and double-spend the balance.
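A minimal sketch of the fix: perform the check and the update under a single lock acquisition, so no other thread can interleave between them.

```rust
use std::sync::{Arc, Mutex};

struct Account {
    balance: u64,
}

// Safer: check and mutate while holding one lock guard, so the
// balance cannot change between the check and the subtraction.
fn withdraw(account: &Arc<Mutex<Account>>, amount: u64) -> bool {
    let mut acc = account.lock().unwrap();
    if acc.balance >= amount {
        acc.balance -= amount;
        true
    } else {
        false
    }
}
```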
Incorrect Access Control
// VULNERABILITY: Missing permission check
fn delete_post(post_id: u64, user_id: u64) -> Result<(), &'static str> {
let post = get_post(post_id)?;
// VULNERABILITY: Never checks if user_id owns post
delete_from_db(post_id);
Ok(())
}
The function compiles. The types are correct. But the authorization logic is missing.
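A hedged sketch of the fix, with stand-ins for the article's get_post and delete_from_db helpers (a real version would hit storage): the ownership check happens before the destructive operation.

```rust
struct Post {
    owner_id: u64,
}

// Stand-in for the article's get_post; a real version would query storage.
fn get_post(_post_id: u64) -> Result<Post, &'static str> {
    Ok(Post { owner_id: 1 })
}

// Stand-in for the article's delete_from_db.
fn delete_from_db(_post_id: u64) {}

// Safer: authorization is verified before anything is deleted.
fn delete_post(post_id: u64, user_id: u64) -> Result<(), &'static str> {
    let post = get_post(post_id)?;
    if post.owner_id != user_id {
        return Err("forbidden: not the post owner");
    }
    delete_from_db(post_id);
    Ok(())
}
```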
Cryptographic Misuse
// VULNERABILITY: Token too short
fn generate_session_token() -> String {
use rand::Rng;
let mut rng = rand::thread_rng();
format!("{:x}", rng.gen::<u64>()) // VULNERABILITY: Only 64 bits (8 bytes)
}
While thread_rng() is cryptographically secure, a 64-bit token is too short for session tokens (only 2^64 possible values). Secure session tokens should be at least 128 bits (16 bytes). Use proper token generation with sufficient entropy:
use rand::Rng;
fn generate_session_token() -> String {
let mut rng = rand::thread_rng();
let token: [u8; 32] = rng.gen(); // 256 bits
hex::encode(token) // Requires: hex = "0.4" in Cargo.toml
}
🛠️ Tools of the Trade
For general-purpose code review tools like Semgrep, CodeQL, and static analysis fundamentals, check out Issue #16 where we covered secure code review tooling in depth. Here, we’ll focus on Rust-specific security tools.
Rust-Specific Static Analysis:
Clippy: Official Rust linter with security-focused rules.
cargo clippy -- -W clippy::all -W clippy::pedantic
cargo-audit: Checks dependencies for known vulnerabilities.
cargo install cargo-audit
cargo audit
cargo-deny: Lints for dependency licenses, sources, security advisories.
cargo install cargo-deny
cargo deny check
cargo-geiger: Detects usage of unsafe in dependencies.
cargo install cargo-geiger
cargo geiger
Rust-Specific Dynamic Analysis:
cargo-fuzz: Fuzzing for Rust using libFuzzer.
cargo install cargo-fuzz
cargo fuzz init
cargo fuzz run target_name
American Fuzzy Lop (AFL): AFL fuzzer for Rust.
Miri: Interpreter that detects undefined behavior and memory errors.
rustup component add miri
cargo miri test
Manual Review Tools:
ripgrep: Fast grep for finding patterns.
rg "unsafe|unwrap|expect|panic" src/
tokei: Count lines of code, useful for scoping reviews.
cargo install tokei
tokei src/
🔒 Defense and Detection
For Developers
1. Enable Overflow Checks in Release Mode
By default, release builds don’t check for overflow. Enable them:
[profile.release]
overflow-checks = true
2. Use Strict Clippy Lints
Add to .cargo/config.toml:
[target.'cfg(all())']
rustflags = [
"-W", "clippy::unwrap_used",
"-W", "clippy::expect_used",
"-W", "clippy::panic",
"-W", "clippy::indexing_slicing",
]
3. Minimize Unsafe Code
Isolate unsafe blocks in dedicated modules. Document invariants:
/// SAFETY: Caller must ensure `ptr` is valid for reads of `len` bytes
/// for the duration of lifetime `'a`.
unsafe fn read_bytes<'a>(ptr: *const u8, len: usize) -> &'a [u8] {
    std::slice::from_raw_parts(ptr, len)
}
4. Use Result Instead of Panic
Replace unwrap() with proper error handling:
// Bad
let value = map.get("key").unwrap();
// Good
let value = map.get("key").ok_or("Key not found")?;
5. Test with Miri
Run tests under Miri to detect undefined behavior:
cargo miri test
6. Fuzz Critical Code Paths
Use cargo-fuzz on parsing, deserialization, and crypto code:
#[cfg(fuzzing)]
pub fn fuzz_parse(data: &[u8]) {
let _ = parse_message(data);
}
7. Enable Security-Focused Features
[dependencies]
serde = { version = "1.0", features = ["derive"] }
[profile.dev]
panic = "abort" # Fail fast on panics during development (cargo test ignores this setting)
[profile.release]
panic = "abort" # Smaller binary, clearer failure behavior
overflow-checks = true
For Auditors
Audit Checklist:
- Run cargo geiger to find all unsafe usage
- Review every unsafe block for memory safety
- Search for unwrap(), expect(), panic!(), and [] indexing
- Check integer arithmetic in financial/critical code
- Verify FFI boundaries are properly validated
- Look for TOCTOU race conditions in multi-threaded code
- Verify cryptographic library usage (key generation, randomness)
- Check authentication and authorization logic
- Test panic behavior with malformed inputs
- Run cargo audit for known vulnerable dependencies
- Use Miri on the test suite to catch undefined behavior
Red Flags:
- High percentage of unsafe code
- transmute usage (lifetime manipulation)
- Manual memory management with Box::from_raw, ptr::write
- Arithmetic on user-controlled values without checks
- FFI calls without validation
- panic = "unwind" in production services
🎯 Key Takeaways
- Memory safety ≠ security. Rust eliminates memory corruption but allows logic bugs, integer overflow, and panic-based DoS
- unsafe blocks require manual auditing. The borrow checker is disabled, so all memory safety rules must be verified manually
- Integer overflow behavior changes between debug and release builds. Always use checked_*, saturating_*, or wrapping_* methods in security-critical code
- Panics can cause DoS. Avoid unwrap(), expect(), direct indexing, and unchecked operations in production
- FFI is the danger zone. Validate all inputs before passing to C/C++ code, never trust C return values
- Logic bugs are language-agnostic. Authentication, authorization, race conditions, and crypto misuse exist in Rust too
- Tooling is essential. Use cargo-audit, cargo-geiger, clippy, and miri as part of your security workflow
- Rust is evolving. Stay updated with RustSec advisories and the Security WG
📚 Further Reading
- Rust Security Guidelines (ANSSI): Comprehensive security guide from French cybersecurity agency
- RustSec Advisory Database: Curated database of security vulnerabilities in Rust crates
- The Rustonomicon: The Dark Arts of Unsafe Rust, the official guide to unsafe code
- Rust Security Working Group: Community working group focused on Rust security
- Memory-Safety Challenge Considered Solved?: Academic study analyzing all Rust CVEs through 2020, showing that memory-safety bugs require unsafe code
- Understanding Memory and Thread Safety Practices: Research paper analyzing 70 real-world Rust memory-safety issues
- Solana Security Best Practices: Security patterns for Rust-based smart contracts
That’s it for this week!
If you’re building or auditing Rust code, don’t assume the compiler catches everything. Spend time reviewing unsafe blocks, checking integer arithmetic, and testing panic scenarios. The memory safety is real, but the security depends on you.
Thanks for reading, and happy hacking 🔐
— Ruben