Core Architecture
A minimal core that routes messages, enforces access, isolates crashes, and lets modules do all the work. 15 source files. Zero business logic.
Message Flow
Every interaction follows a single path through the core. A message enters through an interface, gets routed by path, checked against the ACL, and dispatched to the owning module, which returns a response. Events fan out via pub/sub after the handler completes.
Client → Interface (CLI/HTTP/TCP) → Core Router → ACL Check → Module Handler → Response
                                        ↓
                         Trace (id, timestamp, hops)
                                        ↓
                         Pub/Sub fan-out (if EVENT method)
Every message receives a unique, atomically generated ID at creation. The trace follows it through every hop, local or federated. This is how you debug a request that crosses three nodes and four modules without guessing.
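A minimal sketch of what "atomically generated" means here, assuming a C11 atomic counter; `message_new_id()` and `g_next_msg_id` are illustrative names, not the actual core_message.c API:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative only: one process-wide counter, incremented atomically,
 * yields unique message IDs with no locks and no contention window. */
static atomic_uint_fast64_t g_next_msg_id = 1;

static uint64_t message_new_id(void)
{
    /* fetch-and-add returns the previous value, so two racing
     * threads can never mint the same ID */
    return atomic_fetch_add(&g_next_msg_id, 1);
}
```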
Core Components
The entire core is 15 source files. Each file has a single responsibility. There are no god objects, no shared mutable state outside the hash table, and no file that "does a little of everything."
| Component | File | Description |
|---|---|---|
| Path Router | core_path.c | O(1) FNV-1a hash table + wildcard fallback |
| Module Loader | core_module.c | dlopen/dlsym, reference-counted safe unload |
| Message System | core_message.c | Alloc, route, free with atomic ID generation |
| Authentication | core_auth.c | SHA-256 passwords, API keys, session tokens |
| Event Loop | core_event.c | libev wrapper (epoll/kqueue/select) |
| Pub/Sub | core_pubsub.c | Pattern matching (exact, wildcard, global) |
| Event Registry | core_events.c | ACL-controlled event subscriptions |
| Wire Protocol | core_wire.c | Binary serialization for federation |
| File Store | core_store.c | INI files with atomic writes |
| Multi-Storage | core_storage.c | Provider registry (file+sqlite+psql) |
| Config | core_config.c | INI parser with per-module sections |
| Hash Table | core_hashtable.c | FNV-1a open addressing, auto-resize at 75% load |
| Handlers | core_handlers.c | All /core, /auth, /users, /groups paths |
| Instance | portal_instance.c | Instance wiring + crash isolation |
| Log | core_log.c | Colored timestamped logging |
Path-Based Routing
When a module loads, it registers the paths it owns. The router stores them in an FNV-1a hash table, so exact lookups are O(1). When a message arrives, the router looks up the full path in the hash table. If no exact match is found, it falls back to progressively shorter wildcard paths until a handler is found or the lookup fails.
# Incoming path: /iot/resources/devices
/iot/resources/devices → exact match (checked first)
/iot/resources/* → wildcard fallback
/iot/* → wildcard fallback
/* → global wildcard (last resort)
This means a module can register /iot/* and handle all IoT traffic, or register specific sub-paths for fine-grained control. The exact match always wins. Wildcards only fire when no exact match exists. The lookup is O(1) for exact matches and O(depth) for wildcard fallback — in practice, paths rarely exceed 4 levels.
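A sketch of both pieces, assuming an opaque hash table with a `ht_get()` lookup; `route_lookup()` and the declarations around it are illustrative, not the core_path.c API:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct hashtable hashtable_t;                  /* stands in for core_hashtable */
extern void *ht_get(hashtable_t *ht, const char *key); /* assumed to hash keys with fnv1a() */

/* FNV-1a, the hash named in the component table (64-bit variant). */
static uint64_t fnv1a(const char *s)
{
    uint64_t h = 0xcbf29ce484222325ULL;  /* offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 0x100000001b3ULL;           /* FNV prime */
    }
    return h;
}

/* Illustrative fallback: exact match first, then trim one trailing
 * segment at a time and retry with "/*", ending at the global "/*". */
static void *route_lookup(hashtable_t *ht, const char *path)
{
    void *h = ht_get(ht, path);          /* exact match always wins */
    char buf[256];
    snprintf(buf, sizeof(buf), "%s", path);
    while (!h) {
        char *slash = strrchr(buf, '/');
        if (!slash)
            break;                       /* malformed path, give up */
        if (slash == buf)
            return ht_get(ht, "/*");     /* global wildcard, last resort */
        snprintf(slash, (size_t)(buf + sizeof(buf) - slash), "/*");
        h = ht_get(ht, buf);             /* e.g. "/iot/resources/*" */
        if (!h)
            *slash = '\0';               /* drop segment, try one level up */
    }
    return h;
}
```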
Label-Based ACL
Access control in TLS uses labels. Users belong to groups, which give them labels. Paths can require labels. When a message arrives, the router checks if the authenticated user has at least one label that matches the path's required labels. If the path has no labels, it is open to everyone.
| Path Labels | User Labels | Result |
|---|---|---|
| (none) | (any) | ALLOW — open path |
| admin | admin, dev | ALLOW — "admin" matches |
| admin | dev, viewer | DENY — no match |
| admin, dev | dev | ALLOW — "dev" matches |
| (any) | (root user) | ALLOW — root bypasses all |
The check is a simple set intersection. No role hierarchies, no inheritance trees, no RBAC matrices. A label either matches or it does not. This makes access decisions auditable by inspection — you can read a path's labels and a user's labels and know the answer without running code.
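The whole decision fits in a few lines. A sketch, where the label arrays and `is_root` flag stand in for the core's actual user and path structures:

```c
#include <string.h>

/* Illustrative only: ALLOW iff the path requires no labels, the user is
 * root, or any user label matches any required path label. */
static int acl_allow(const char **path_labels, int n_path,
                     const char **user_labels, int n_user,
                     int is_root)
{
    if (is_root)
        return 1;                        /* root bypasses all checks */
    if (n_path == 0)
        return 1;                        /* unlabeled path is open */
    for (int i = 0; i < n_path; i++)
        for (int j = 0; j < n_user; j++)
            if (strcmp(path_labels[i], user_labels[j]) == 0)
                return 1;                /* one matching label is enough */
    return 0;                            /* no intersection: DENY */
}
```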
Crash Isolation
A crashing module must never take down the core. TLS wraps every module handler call with sigsetjmp/siglongjmp. If a module triggers SIGSEGV or SIGBUS, the signal handler catches it, logs the crash, marks the module as unloaded, and returns an error response. The core continues serving all other modules without interruption.
crash_sig = sigsetjmp(g_crash_jmp, 1);
if (crash_sig == 0) {
    rc = module->fn_handle(core, msg, resp);  // safe call
} else {
    LOG_ERROR("MODULE CRASH: '%s' signal %d", mod_name, crash_sig);
    resp->status = PORTAL_INTERNAL_ERROR;
    module->loaded = 0;                       // auto-disable crashed module
}
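The other half of the mechanism is the signal handler itself. A minimal sketch, assuming `g_crash_jmp` is the `sigjmp_buf` filled by the `sigsetjmp()` call above and that handlers are installed once at startup; `crash_isolation_install()` is a hypothetical name:

```c
#include <setjmp.h>
#include <signal.h>
#include <string.h>

static sigjmp_buf g_crash_jmp;  /* the buffer sigsetjmp() filled above */

/* Jump back to the sigsetjmp() point with the signal number, so the
 * dispatch code lands in its else-branch instead of dying. Passing 1
 * to sigsetjmp() means the signal mask is restored on the way back. */
static void crash_handler(int sig)
{
    siglongjmp(g_crash_jmp, sig);
}

static void crash_isolation_install(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = crash_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGBUS, &sa, NULL);
}
```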
After a crash, the module is automatically disabled. It can be reloaded with module reload <name> once the bug is fixed. No restart of the core is needed. No other module is affected. This is the difference between "the IoT module crashed" and "the entire system is down."
Event System
Modules register events they can emit. Other modules (or external clients) subscribe to those events. When an event fires, all subscribers are notified. The event registry is ACL-controlled — subscribers need matching labels to receive events from protected paths.
There are two delivery mechanisms:
| Mechanism | Target | How |
|---|---|---|
| Internal callbacks | In-process modules | Direct function call within the core process |
| fd notifications | External clients | write() to the client's file descriptor |
Internal callbacks are synchronous and fast — they execute in the context of the emitting module. File descriptor notifications are asynchronous — the event is serialized and written to the client's connection. This dual mechanism means a module can react to events in real time, and an external client connected via TCP or WebSocket receives the same events without polling.
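Both mechanisms condensed into one sketch; the `subscriber` struct and `event_fanout()` are illustrative, and the real pub/sub tracks more state (patterns, ACL labels) than shown here:

```c
#include <stddef.h>
#include <unistd.h>

/* Illustrative subscriber: either an in-process callback or an external
 * client file descriptor (fd == -1 when unused). */
typedef struct subscriber {
    void (*cb)(const char *event, const void *payload, size_t len);
    int fd;
    struct subscriber *next;
} subscriber_t;

/* Fan one event out to every subscriber on the list: a direct call for
 * modules, a write() of the wire-serialized event for external clients. */
static void event_fanout(subscriber_t *subs, const char *event,
                         const void *wire_buf, size_t wire_len)
{
    for (subscriber_t *s = subs; s; s = s->next) {
        if (s->cb)
            s->cb(event, wire_buf, wire_len);         /* synchronous, in-process */
        else if (s->fd >= 0)
            (void)write(s->fd, wire_buf, wire_len);   /* async fd notification */
    }
}
```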
Storage Architecture
TLS uses a provider-based storage system. When a user or group is modified, the change is written to all registered storage providers simultaneously. When a read is needed, providers are queried in order — the first successful read wins.
User change → core_storage
    → file provider (always)      → /etc/portal/<name>/users/admin.conf
    → sqlite provider (if loaded) → portal.db
    → psql provider (if loaded)   → remote PostgreSQL
The file provider is always present — it is built into the core. SQLite and PostgreSQL providers are modules that register themselves as storage backends on load. This means a standalone instance uses INI files by default, and adding a database is a one-line config change with no migration.
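A sketch of the registry's two operations as described above; the `provider` struct and function names are hypothetical, not the core_storage.c API:

```c
#include <stddef.h>

/* Hypothetical provider interface: each backend (file, sqlite, psql)
 * registers read/write callbacks when it loads. */
typedef struct provider {
    int (*write)(const char *key, const char *value);         /* 0 on success */
    int (*read)(const char *key, char *out, size_t out_len);  /* 0 on success */
    struct provider *next;
} provider_t;

/* Writes fan out to every registered provider. */
static void storage_write(provider_t *providers, const char *key,
                          const char *value)
{
    for (provider_t *p = providers; p; p = p->next)
        p->write(key, value);
}

/* Reads walk the providers in registration order; first success wins. */
static int storage_read(provider_t *providers, const char *key,
                        char *out, size_t out_len)
{
    for (provider_t *p = providers; p; p = p->next)
        if (p->read(key, out, out_len) == 0)
            return 0;
    return -1;  /* no provider could satisfy the read */
}
```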