NexFS - Native Zig Flash Filesystem for NexusOS
The sovereign flash filesystem for Libertaria nodes and embedded devices
What is NexFS?
NexFS is a native Zig implementation of a flash-aware filesystem designed for Libertaria nodes and embedded sovereign devices. It provides reliable, wear-leveling-aware storage for resource-constrained environments where data integrity and flash longevity are critical.
Key Design Goals
- Flash-First Architecture: Optimized for raw NAND/NOR flash with wear leveling awareness
- Zero Dynamic Allocation: All buffers provided by caller - no runtime memory allocation
- Platform Agnostic: Works with any flash HAL via callback interface
- Data Integrity: CRC32C checksums on all metadata structures
- Sovereign by Design: No external dependencies, no vendor lock-in, fully auditable
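To make the integrity goal concrete, here is a minimal, self-contained CRC32C (Castagnoli) routine. NexFS itself is Zig; the sketch is in C only to keep it portable and compact. This is the standard bitwise, reflected form of the algorithm, not NexFS's actual implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init and final XOR of 0xFFFFFFFF. Matches the well-known check
 * value: crc32c("123456789") == 0xE3069283. */
uint32_t crc32c(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}
```

Real implementations use a lookup table (or the SSE4.2 `crc32` instruction) for speed; the bitwise form above is the easiest to audit.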
Build Profiles & Architecture
NexFS implements a modular building-block ("Baukasten") architecture (SPEC-004) with three build profiles. It is not one binary but three configurations compiled from shared modules:
Three Build Profiles
| Profile | Modules | Footprint | Use Case |
|---|---|---|---|
| nexfs-core | CAS + Block Valve + Hash | ~40KB | IoT, Satellite, Sensor |
| nexfs-sovereign | core + CDC + DAG | ~120KB | Phone, Laptop, Desktop |
| nexfs-mesh | sovereign + UTCP + Gossip | ~200KB | Home Box, Chapter Node |
Module Compilation
NexFS selects modules with Zig build options resolved at compile time:

```zig
// nexfs/config.zig
const build_options = @import("build_options");

pub const nexfs_cas = build_options.nexfs_cas;     // CAS layer (always)
pub const nexfs_cdc = build_options.nexfs_cdc;     // Content-Defined Chunking
pub const nexfs_mesh = build_options.nexfs_mesh;   // UTCP cluster + gossip
pub const nexfs_dag = build_options.nexfs_dag;     // Merkle DAG directories
pub const nexfs_dedup = build_options.nexfs_dedup; // Cross-node dedup
```

Build Commands:

```sh
# Core profile (IoT sensor)
zig build -Dnexfs_cas=true

# Sovereign profile (laptop)
zig build -Dnexfs_cas=true -Dnexfs_cdc=true -Dnexfs_dag=true

# Mesh profile (home box)
zig build -Dnexfs_cas=true -Dnexfs_cdc=true -Dnexfs_dag=true -Dnexfs_mesh=true
```
Profile Comparison
nexfs-core is your LittleFS successor:
- ✅ Robust, power-safe
- ✅ Content-addressed storage
- ✅ Block-level wear leveling
- ❌ No network code
- ❌ No chunker
- ❌ No gossip
- Target: Root partition for everything that blinks and beeps
nexfs-sovereign adds local intelligence:
- ✅ CDC-Chunking for efficient snapshots
- ✅ Local deduplication
- ✅ Merkle DAGs for Time-Travel
- ❌ Still single-node
- Target: Phone, laptop, desktop
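Content-defined chunking is what makes the sovereign profile's dedup and snapshots cheap: chunk boundaries depend on local content, not file offsets, so an insertion early in a file does not shift every later chunk. Below is a simplified gear-hash-style boundary finder in C (the project's FastCDC chunker proper adds normalized chunking, min/max chunk sizes, and a precomputed 256-entry random gear table; the arithmetic gear function here is a stand-in):

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for a precomputed 256-entry random gear table. */
static uint64_t gear(uint8_t b) {
    uint64_t x = (uint64_t)b + 1;
    x *= 0x9E3779B97F4A7C15ull; /* golden-ratio multiplier */
    return x ^ (x >> 31);
}

/* A boundary is declared when the rolling hash has its low 13 bits
 * clear, giving ~8 KiB average chunks. Returns the length of the
 * first chunk in `data`. */
size_t next_boundary(const uint8_t *data, size_t len) {
    const uint64_t mask = (1ull << 13) - 1;
    uint64_t h = 0;
    for (size_t i = 0; i < len; i++) {
        h = (h << 1) + gear(data[i]); /* bytes older than 64 age out */
        if ((h & mask) == 0 && i > 0)
            return i + 1;             /* chunk ends after byte i */
    }
    return len;                       /* no boundary: rest is one chunk */
}
```

Because the hash window is effectively 64 bytes, two files that share a long run of bytes will agree on the boundaries inside that run, which is exactly what lets snapshots and package updates share chunks.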
nexfs-mesh adds the network layer:
- ✅ UTCP cluster communication
- ✅ Gossip-based peer discovery
- ✅ Cross-node deduplication
- ✅ Chapter-Mesh integration
- ✅ Kinetic Credits economy
- Target: Home box, Chapter node
Specifications
NexFS is defined by three SPECs that build on each other:
| SPEC | Name | Scope | Size |
|---|---|---|---|
| SPEC-084 | NexFS Core Format | Superblock, Block Pointers, Hash-ID, Profile-Byte | Foundation |
| SPEC-085 | NexFS Sovereign Extensions | CDC Chunking, Merkle DAG, Local Dedup, Time Travel | +80KB |
| SPEC-704 | UTCP Storage Extensions | UTCP Storage Ops, Gossip, Chapter-Mesh, Kinetic Credits | +80KB |
Dependency Chain:
```
SPEC-084 (Core) → SPEC-085 (Sovereign) → SPEC-704 (Mesh)
      ↓                   ↓                    ↓
    40KB                +80KB                +80KB
     IoT               Laptop              Home Box
```
Each SPEC builds on the previous one: core compiles without the others, sovereign requires core, and mesh requires both.
Use Cases
1. Libertaria Mesh Nodes
Primary Use Case: Storage layer for Libertaria Capsule nodes
```
┌─────────────────────────────────────────┐
│         Libertaria Capsule Node         │
│                                         │
│  ┌───────────────────────────────────┐  │
│  │  L3 Gossip (QVL Trust Edges)      │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │  L2 Session (Noise Handshakes)    │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │  L1 Identity (SoulKeys)           │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │  NexFS (Persistent Storage)       │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │  Raw Flash (NAND/NOR/SPI)         │  │
│  └───────────────────────────────────┘  │
└─────────────────────────────────────────┘
```
Why NexFS for Libertaria?
- Persistence: SoulKeys, peer tables, trust graphs survive reboots
- Integrity: CRC32C ensures metadata hasn't been corrupted
- Wear Leveling: Tracks erase counts to maximize flash lifespan
- Minimal Footprint: Zero allocation design fits embedded constraints
- Fast Boot: No journal replay, direct mount from superblock
2. Embedded Sovereign Devices
Secondary Use Case: IoT devices, Raspberry Pi, ESP32, microcontrollers
Examples:
- Solar Monitor Nodes: Store sensor readings, config, firmware updates
- Weather Network: Log environmental data locally before sync
- Pager Devices: Message queue persistence
- Home Automation: Device state, automation rules, logs
Why NexFS for Embedded?
- Raw Flash Support: Works directly with SPI flash, no FTL layer needed
- Power-Loss Resilience: Dual superblock backup survives sudden power loss
- Deterministic: Fixed buffer sizes, predictable memory usage
- No OS Dependencies: Works bare-metal or with any RTOS
Architecture
On-Disk Layout
```
┌─────────────────────────────────────────────┐
│ Block 0: Primary Superblock (128 bytes)     │
├─────────────────────────────────────────────┤
│ Block 1: Backup Superblock (128 bytes)      │
├─────────────────────────────────────────────┤
│ Blocks 2-N: Block Allocation Map (BAM)      │
│   - Tracks allocation status                │
│   - Records erase counts (wear leveling)    │
│   - Bad block marking                       │
├─────────────────────────────────────────────┤
│ Blocks N+1-N+4: Inode Table                 │
│   - File/directory metadata                 │
│   - Inode IDs 1-128                         │
├─────────────────────────────────────────────┤
│ Blocks N+5+: Data Blocks                    │
│   - File/directory contents                 │
│   - Wear-leveled allocation                 │
└─────────────────────────────────────────────┘
```
Key Components
1. Superblock
- Magic number: `0x4E455846` ("NEXF")
- Generation counter for crash recovery
- Mount count for health monitoring
- CRC32C checksum for integrity
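The fields above are enough to sketch the crash-recovery mount rule in C (the project itself is Zig). Only the 128-byte size, magic value, generation counter, and checksum come from this document; the field order, padding, and the FNV-1a stand-in checksum are assumptions for illustration:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical 128-byte superblock layout; real field order may differ. */
typedef struct {
    uint32_t magic;         /* 0x4E455846, "NEXF" */
    uint32_t version;
    uint64_t generation;    /* bumped on every commit */
    uint32_t mount_count;   /* health monitoring */
    uint8_t  reserved[104]; /* pad so the checksum lands at byte 124 */
    uint32_t checksum;      /* over the first 124 bytes */
} Superblock;

_Static_assert(sizeof(Superblock) == 128, "superblock must be 128 bytes");

/* Stand-in checksum (FNV-1a); the real filesystem uses CRC32C. */
static uint32_t cksum(const uint8_t *d, size_t n) {
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < n; i++) { h ^= d[i]; h *= 16777619u; }
    return h;
}

/* Mount rule: of the two copies, use the valid one with the highest
 * generation; if both fail validation, the mount fails. */
const Superblock *select_superblock(const Superblock *pri, const Superblock *bak) {
    int p_ok = pri->magic == 0x4E455846u &&
               cksum((const uint8_t *)pri, 124) == pri->checksum;
    int b_ok = bak->magic == 0x4E455846u &&
               cksum((const uint8_t *)bak, 124) == bak->checksum;
    if (p_ok && b_ok) return pri->generation >= bak->generation ? pri : bak;
    if (p_ok) return pri;
    if (b_ok) return bak;
    return NULL; /* both corrupt: mount fails */
}
```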
2. Block Allocation Map (BAM)
- Per-block metadata: allocated, bad, reserved, needs_erase
- Erase count tracking for wear leveling
- Generation counter for block age
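One plausible shape for a BAM entry, sketched in C: the document names the per-block flags and counters but not their packing, so the field widths and flag bit positions below are illustrative:

```c
#include <stdint.h>

/* Hypothetical per-block BAM entry; actual NexFS packing may differ. */
typedef struct {
    uint32_t erase_count; /* wear-leveling statistic */
    uint16_t generation;  /* block age */
    uint8_t  flags;       /* allocation state bits, see below */
    uint8_t  _pad;
} BamEntry;

enum {
    BAM_ALLOCATED   = 1u << 0,
    BAM_BAD         = 1u << 1,
    BAM_RESERVED    = 1u << 2,
    BAM_NEEDS_ERASE = 1u << 3,
};

/* A block is a candidate for allocation if it is neither in use,
 * bad, nor reserved. needs_erase blocks are reclaimable: they only
 * require an erase cycle before reuse. */
static inline int bam_is_free(const BamEntry *e) {
    return (e->flags & (BAM_ALLOCATED | BAM_BAD | BAM_RESERVED)) == 0;
}
```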
3. Inode Table
- File/directory metadata
- Supports: Regular, Directory, Symlink, Device nodes
- Max filename: 255 characters
4. Flash Interface
```zig
pub const FlashInterface = struct {
    ctx: *anyopaque,
    read: *const fn (ctx: *anyopaque, addr: u64, buffer: []u8) NexFSError!usize,
    write: *const fn (ctx: *anyopaque, addr: u64, buffer: []const u8) NexFSError!void,
    erase: *const fn (ctx: *anyopaque, block_addr: BlockAddr) NexFSError!void,
    sync: *const fn (ctx: *anyopaque) NexFSError!void,
};
```
Features
✅ Implemented (v0.1.0)
- Format/Initialization: `format()` creates a fresh filesystem
- Superblock Management: Primary + backup with checksums
- Block Allocation: BAM-based allocation with wear tracking
- Inode Operations: Create, read, write, delete
- Directory Operations: mkdir, rmdir, readdir, lookup
- File Operations: open, read, write, close, seek
- Path Resolution: Full path support (`/path/to/file`)
- Checksum Verification: CRC32C on all metadata
- Zero Allocation: All buffers provided by caller
🚧 Planned (Future Versions)
- Wear Leveling Algorithm: Active block rotation based on erase counts
- Bad Block Management: Automatic bad block detection and marking
- Defragmentation: Reclaim fragmented data blocks
- Snapshots: Point-in-time filesystem snapshots
- Compression: Optional LZ4 compression for data blocks
- Encryption: Optional XChaCha20-Poly1305 encryption
Chapter-Mesh Architecture (nexfs-mesh)
The Strategic Moat
When a user installs a Sovereign device, NexFS creates a two-partition layout:
```
┌─ /dev/nvme0 ─────────────────────────────────┐
│                                              │
│  Partition 1: nexfs-sovereign (Root, Private)│
│  ├── /Cas   (System, Immutable)              │
│  ├── /Nexus (Config, Encrypted)              │
│  └── /Data  (User, Encrypted)                │
│                                              │
│  Partition 2: nexfs-mesh (Chapter Pool)      │
│  ├── Encrypted at rest (Monolith)            │
│  ├── Auto-joins Chapter via UTCP Discovery   │
│  ├── Contributes: Storage + Bandwidth        │
│  └── Receives: Redundancy + Content Access   │
│                                              │
└──────────────────────────────────────────────┘
```
Installation Flow
```sh
# During Sovereign installation
nexus forge --chapter "frankfurt-01"

# User prompt:
#   "How much storage do you want to contribute to the Chapter Network?"
#   [Slider: 10GB ... 500GB ... 2TB]
#   Default: 10% of disk or 50GB, whichever is smaller

nexfs format /dev/nvme0p2 --mode sovereign-mesh \
    --chapter-id "frankfurt-01" \
    --quota 50G \
    --replication-factor 3
```
What Happens in the Background
1. UTCP Discovery (SPEC-703 pattern):
   - Box broadcasts `PEER_ANNOUNCE` on the local network
   - Connects to known Chapter gateways
2. Gossip Sync:
   - Learns which other Sovereigns are online
   - Exchanges peer lists and chunk availability
3. Lazy Replication:
   - Popular chunks (Nip packages, community content) are cached automatically
   - Replication factor is enforced (default: 3 copies)
4. Sealed Storage:
   - The user cannot see the Chapter partition as a normal directory
   - It is an opaque block pool
   - No user snooping on transit data is possible
The Economy (Kinetic Credits)
Integration with Kinetic Economy Model (SPEC-053):
```
You give:  100GB Mesh-Storage + Uptime
You get:   Kinetic Credits
You use:   Credits for Package-Downloads, Mesh-Access, Priority-Routing
```
Engineered Reciprocity - No altruism, no charity:
- More contribution = more credits
- No contribution = still can read (Commonwealth packages are free)
- Premium content and priority bandwidth cost credits
The Torrent Killer
Why NexFS Chapter-Mesh beats BitTorrent:
| Feature | BitTorrent | NexFS Chapter-Mesh |
|---|---|---|
| FS Integration | Separate client | Native FS (nip install firefox) |
| Deduplication | Full file transfer | CAS-Dedup (5% delta transfer) |
| Encryption | Optional | Encrypted at rest by default |
| NAT Traversal | Problematic | UTCP CellID routing (no NAT issues) |
| Discovery | Tracker/DHT | Gossip-based peer discovery |
| Economy | None | Kinetic Credits |
Example: Firefox 120.0.1 and 120.0.2 share 95% of their chunks:
- BitTorrent: transfers the entire `.tar.gz` again
- NexFS: transfers only the 5% difference
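That saving falls out of simple set arithmetic over chunk IDs: a node requests only the chunks it lacks. A naive C sketch (linear scan for clarity; a real store would consult a hash index, and `ChunkId` stands in for a 32-byte BLAKE3-style hash):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct { uint8_t id[32]; } ChunkId;

/* Count how many chunk IDs in `want` are absent from `have`;
 * only those chunks must cross the wire. */
size_t chunks_to_fetch(const ChunkId *want, size_t n_want,
                       const ChunkId *have, size_t n_have) {
    size_t missing = 0;
    for (size_t i = 0; i < n_want; i++) {
        int found = 0;
        for (size_t j = 0; j < n_have && !found; j++)
            found = memcmp(want[i].id, have[j].id, 32) == 0;
        missing += !found;
    }
    return missing;
}
```

With 20 chunks in the new release and 19 already present locally, only one chunk (5%) is fetched.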
Superblock Extension
One byte in the superblock determines the profile:
```zig
pub const NexfsSuperblock = extern struct {
    magic: u32,              // "NXFS"
    version: u16,
    hash_algo: u8,           // 0x00=BLAKE3-256, 0x01=BLAKE3-128, 0x02=XXH3-128
    profile: u8,             // 0x00=core, 0x01=sovereign, 0x02=mesh  ← NEW
    chapter_id: CellID,      // 128-bit (only for mesh, otherwise 0)
    mesh_quota: u64,         // bytes dedicated to mesh (only for mesh)
    replication_target: u8,  // desired copies (default 3)
    // ... rest as usual
};
```
Filesystem Hierarchy Extension
```
/Data/
├── Volume/
│   ├── Local/       # Standard
│   ├── Remote/      # Cloud mounts
│   ├── External/    # USB
│   └── Mesh/        # NEW: Chapter Pool
│       ├── .quota       # 50GB
│       ├── .chapter-id  # frankfurt-01
│       └── .peers       # Known CellIDs
```
UTCP Storage Protocol (SPEC-704)
Wire Protocol Extensions:
| Category | Operations | Size |
|---|---|---|
| Block Ops | GET, PUT, DELETE, REPLICATE, VERIFY, REPAIR, STATS | 7 ops |
| DAG Ops | RESOLVE, ANCESTORS, DIFF | 3 ops |
| Peer Ops | ANNOUNCE, LEAVE, HEARTBEAT | 3 ops |
All operations use a 16-byte extension header on existing UTCP frames.
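For illustration, here is what such a 16-byte extension header could look like in C. The actual field split is defined by SPEC-704; the `category`/`op`/`flags`/`length`/`request_id` layout below is an assumption, chosen only to show how the three operation categories and a fixed 16-byte size fit together:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 16-byte storage extension header (not the SPEC-704
 * wire format). Packed so the on-wire size is exact. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  category;   /* 0 = block ops, 1 = DAG ops, 2 = peer ops */
    uint8_t  op;         /* e.g. GET, RESOLVE, ANNOUNCE within a category */
    uint16_t flags;
    uint32_t length;     /* payload bytes following this header */
    uint64_t request_id; /* correlates request and response frames */
} StorageExtHeader;
#pragma pack(pop)

_Static_assert(sizeof(StorageExtHeader) == 16, "header must be 16 bytes");
```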
Key Features:
- Gossip-based peer discovery
- Credit-based flow control (SPEC-702 pattern reuse)
- Replication-factor enforcement
- Chapter-Mesh integration with Kinetic Credits
Quick Start
Installation
Add NexFS to your build.zig.zon:
```zig
.{
    .name = "your-project",
    .version = "0.1.0",
    .dependencies = .{
        .nexfs = .{
            .url = "https://git.sovereign-society.org/nexus/nexfs/archive/main.tar.gz",
            .hash = "...",
        },
    },
}
```
Example: Basic Usage
```zig
const std = @import("std");
const nexfs = @import("nexfs");

// 1. Define your flash interface
const MyFlash = struct {
    flash_data: []u8,

    pub fn read(ctx: *anyopaque, addr: u64, buffer: []u8) nexfs.NexFSError!usize {
        const self: *MyFlash = @ptrCast(@alignCast(ctx));
        @memcpy(buffer, self.flash_data[addr..][0..buffer.len]);
        return buffer.len;
    }

    pub fn write(ctx: *anyopaque, addr: u64, buffer: []const u8) nexfs.NexFSError!void {
        const self: *MyFlash = @ptrCast(@alignCast(ctx));
        @memcpy(self.flash_data[addr..][0..buffer.len], buffer);
    }

    pub fn erase(ctx: *anyopaque, block_addr: nexfs.BlockAddr) nexfs.NexFSError!void {
        _ = ctx;
        _ = block_addr;
        // Erase flash block (set to 0xFF for NAND)
    }

    pub fn sync(ctx: *anyopaque) nexfs.NexFSError!void {
        _ = ctx;
        // Flush any caches
    }
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    // Simulated 1 MiB flash device; on hardware this is your SPI driver
    var flash = MyFlash{ .flash_data = try allocator.alloc(u8, 1024 * 1024) };
    defer allocator.free(flash.flash_data);

    // 2. Configure NexFS
    var read_buf: [4096]u8 = undefined;
    var write_buf: [4096]u8 = undefined;
    var workspace: [256]u8 = undefined;

    const config = nexfs.Config{
        .flash = .{
            .ctx = &flash,
            .read = MyFlash.read,
            .write = MyFlash.write,
            .erase = MyFlash.erase,
            .sync = MyFlash.sync,
        },
        .device_size = 1024 * 1024,
        .block_size = 4096,
        .block_count = 256,
        .page_size = 256,
        .checksum_algo = .CRC32C,
        .read_buffer = &read_buf,
        .write_buffer = &write_buf,
        .workspace = &workspace,
        .time_source = null,
        .verbose = true,
    };

    // 3. Format the filesystem
    try nexfs.format(&config.flash, &config, &write_buf);

    // 4. Create a file
    var fs = try nexfs.NexFS.init(allocator, config);
    const fd = try fs.create("/config.txt");
    try fs.write(fd, "hello nexfs");
    try fs.close(fd);

    // 5. Read it back
    var buf: [64]u8 = undefined;
    const fd2 = try fs.open("/config.txt");
    const len = try fs.read(fd2, &buf);
    try std.io.getStdOut().writeAll(buf[0..len]);
    try fs.close(fd2);
}
```
Configuration Options
```zig
const Config = struct {
    flash: FlashInterface,        // Your flash HAL
    device_size: u64,             // Total flash size in bytes
    block_size: BlockSize,        // Flash block size (512, 1024, 2048, 4096)
    block_count: u32,             // Number of blocks
    page_size: PageSize,          // Flash page size for alignment
    checksum_algo: ChecksumAlgo,  // None, CRC16, or CRC32C
    read_buffer: []u8,            // Buffer >= block_size
    write_buffer: []u8,           // Buffer >= block_size
    workspace: []u8,              // Buffer >= page_size
    time_source: ?TimeSource,     // Optional timestamp provider
    verbose: bool,                // Enable debug logging
};
```
Recommended Configurations
1. Raspberry Pi with SPI Flash (1MB)

```zig
.block_size = 4096,
.page_size = 256,
.block_count = 256,
.checksum_algo = .CRC32C,
```

2. ESP32 with Flash (4MB)

```zig
.block_size = 4096,
.page_size = 256,
.block_count = 1024,
.checksum_algo = .CRC32C,
```

3. Microcontroller with NOR Flash (512KB)

```zig
.block_size = 2048,
.page_size = 256,
.block_count = 256,
.checksum_algo = .CRC16, // Faster on limited CPUs
```
Design Philosophy
Sovereign Storage Principles
- No Secrets: All code is open source and auditable (LSL-1.0)
- No Dependencies: Zero external libraries, pure Zig
- No Vendor Lock-in: Standard interfaces, portable anywhere
- No Hidden Allocation: Explicit memory management
- No Trust Required: Verify integrity with checksums
Flash-Aware Design
Why Raw Flash?
- Predictable Performance: No FTL latency spikes
- Full Control: Wear leveling algorithm you control
- Longer Lifespan: Avoid consumer-grade FTL write amplification
- Lower Power: No background garbage collection
Wear Leveling Strategy:
- Track erase counts per block (BAM)
- Prefer blocks with lowest erase counts for writes
- Reserve high-erase-count blocks for cold data
- Target: Even wear distribution across flash lifetime
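The "prefer least-worn block" step above reduces to a minimum search over the free, good blocks in the BAM. A naive C sketch of that policy (a production allocator would bucket blocks by erase count rather than scanning; the `Block` fields are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t erase_count;
    uint8_t  allocated;
    uint8_t  bad;
} Block;

/* Return the index of the free, good block with the lowest erase
 * count, or -1 if no block is available. Writing to the least-worn
 * block pushes the erase distribution toward even wear. */
ptrdiff_t pick_write_block(const Block *bam, size_t n) {
    ptrdiff_t best = -1;
    for (size_t i = 0; i < n; i++) {
        if (bam[i].allocated || bam[i].bad) continue;
        if (best < 0 || bam[i].erase_count < bam[(size_t)best].erase_count)
            best = (ptrdiff_t)i;
    }
    return best;
}
```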
Performance Characteristics
| Operation | Typical Latency | Notes |
|---|---|---|
| Mount | < 10ms | Read superblock, validate checksum |
| Format | 100-500ms | Initialize all metadata blocks |
| File Create | 5-20ms | Allocate inode, write metadata |
| File Read (4KB) | 1-5ms | Single block read |
| File Write (4KB) | 10-30ms | Erase + write cycle |
| Directory Lookup | 1-5ms | Inode table scan |
Memory Requirements:
- Minimum: 2 × block_size + page_size (e.g., 2 × 4KB + 256B ≈ 8.25KB)
- Recommended: 2 × block_size + 2 × page_size (for async ops)
- Allocator: Not required (zero dynamic allocation)
Roadmap
Version 0.2.0 (Q2 2026) - Sovereign Profile
Focus: SPEC-085 Implementation
- FastCDC chunker implementation
- CAS store with inline dedup
- Merkle DAG directory structure
- TimeWarp snapshots (`nexfs snap create/rollback/diff`)
- Build profiles: core vs sovereign
- Active wear leveling algorithm
- Bad block management
Version 0.3.0 (Q3 2026) - Mesh Profile
Focus: SPEC-704 Implementation
- UTCP storage protocol (7 Block Ops, 3 DAG Ops, 3 Peer Ops)
- Gossip-based peer discovery
- Cross-node deduplication
- Chapter-Mesh integration
- Kinetic Credits integration
- Credit-based flow control
- Compression support (LZ4)
- Defragmentation tool
- Filesystem check utility (fsck)
Version 1.0.0 (Q4 2026) - Production Hardening
Focus: Production Readiness
- Encryption support (XChaCha20-Poly1305)
- CRDT for concurrent DAG edits
- Multi-Chapter federation
- Performance benchmarks
- Production-hardened
- Full Libertaria stack integration
Future Considerations
- CRDT-Specification for concurrent DAG edits (Phase 50+)
- Multi-Chapter-Federation for cross-chapter chunk exchange (Phase 50+)
Note: Features grow into existing SPECs as Sections when needed, not as standalone documents.
Testing
Current Test Coverage: 251/253 tests passing (99.2%)
```sh
# Run tests
zig build test

# Run with verbose output
zig build test -Dverbose
```
Test Categories:
- ✅ Superblock validation
- ✅ Checksum verification
- ✅ Block allocation/deallocation
- ✅ Inode operations
- ✅ Directory operations
- ✅ File operations
- ✅ Path resolution
- 🔄 Wear leveling (in progress)
- 🔄 Bad block handling (planned)
Security Considerations
Data Integrity:
- CRC32C protects all metadata from silent corruption
- Dual superblock survives single-block corruption
- Bad block marking prevents data loss
Power-Loss Resilience:
- Primary + backup superblock
- Metadata writes are atomic (single block)
- No journal to replay
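The power-loss guarantee comes from write ordering: the backup superblock is made durable before the primary is touched, so at every instant at least one copy on flash is valid. A C sketch against a simulated two-slot flash (`write_block`/`flash_sync` stand in for the `FlashInterface` callbacks; the ordering is the point, not the API):

```c
#include <stdint.h>
#include <string.h>

/* Two 128-byte superblock slots on a simulated flash device. */
static uint8_t flash_sim[2][128];
static int sync_barriers = 0; /* completed durability barriers */

static void write_block(int slot, const uint8_t sb[128]) {
    memcpy(flash_sim[slot], sb, 128);
}
static void flash_sync(void) { sync_barriers++; }

/* Crash-safe update order. The caller has already bumped the
 * generation counter inside `sb`. Between the two barriers the old
 * primary is still intact, so a power cut can lose at most the
 * in-flight generation, never both copies. */
void commit_superblock(const uint8_t sb[128]) {
    write_block(1, sb); /* backup copy first */
    flash_sync();       /* backup durable before touching primary */
    write_block(0, sb); /* then the primary */
    flash_sync();       /* primary durable: commit complete */
}
```

On mount, the copy with the higher generation number (and a valid checksum) wins, which is why no journal replay is needed.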
Future Security Features:
- Optional encryption at rest (v1.0)
- Authenticated encryption (AEAD)
- Key derivation from SoulKey (Libertaria integration)
Contributing
Development Status: Alpha (v0.1.0)
Contribution Areas:
- Wear leveling algorithm improvements
- Bad block detection strategies
- Performance optimizations
- Test coverage improvements
- Documentation enhancements
Code Style:
- Follow Zig style guidelines
- SPDX license headers required
- BDD-style tests preferred
- Panopticum architecture compliance
License
License: LSL-1.0 (Libertaria Source License 1.0)
Summary:
- ✅ Open source and auditable
- ✅ Free to use for sovereign applications
- ✅ Modifications must be contributed back
- ✅ No commercial restrictions for sovereign use cases
See LICENSE for full text.
Community
Repository: https://git.sovereign-society.org/nexus/nexfs
Organization: Nexus
- rumpk - Runtime package manager
- nip - Nexus package format
- nexus - Core utilities
- nipbox - Package repository
- nexfs - Flash filesystem
Related Projects:
- Libertaria Stack - P2P mesh networking
- Janus Language - Systems programming language
Acknowledgments
Inspired By:
- LittleFS - Flash-friendly embedded filesystem
- JFFS2 - Journaling flash filesystem
- YAFFS2 - Yet another flash filesystem
Built With:
- Zig - Systems programming language
- Libertaria - Sovereign P2P mesh network
NexFS - Storage for Sovereign Systems
Part of the Nexus ecosystem for Libertaria nodes and embedded devices