
Core Architecture

A minimal core that routes messages, enforces access, isolates crashes, and lets modules do all the work. 15 source files. Zero business logic.

The core is deliberately small. Every line of code in the core must justify its existence. If functionality can live in a module, it must live in a module. The core routes, loads, guards, and traces — nothing more.

Message Flow

Every interaction follows a single path through the core. A message enters through an interface, is routed by path, checked against the ACL, and dispatched to the owning module, which returns a response. Events fan out via pub/sub after the handler completes.

Client → Interface (CLI/HTTP/TCP) → Core Router → ACL Check → Module Handler → Response
                                            ↓
                                       Trace (id, timestamp, hops)
                                            ↓
                                       Pub/Sub fan-out (if EVENT method)

Every message receives a unique, atomically generated ID at creation. The trace follows it through every hop — local or federated. This is how you debug a request that crosses three nodes and four modules without guessing.
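A minimal sketch of what that ID generation can look like with C11 atomics. The names (msg_next_id, msg_trace_t, trace_init) are illustrative, not the actual core_message.c symbols:

#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static _Atomic uint64_t msg_next_id = 1;

typedef struct {
    uint64_t id;        /* unique across the process lifetime */
    uint64_t ts_ns;     /* creation timestamp carried in the trace */
    uint32_t hops;      /* incremented at every routing hop */
} msg_trace_t;

static void trace_init(msg_trace_t *t)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    /* fetch_add is a single atomic operation: no lock, and no
     * duplicate IDs even under concurrent message creation */
    t->id    = atomic_fetch_add(&msg_next_id, 1);
    t->ts_ns = (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    t->hops  = 0;
}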

Core Components

The entire core is 15 source files. Each file has a single responsibility. There are no god objects, no shared mutable state outside the hash table, and no file that "does a little of everything."

Component         File                 Description
Path Router       core_path.c          O(1) FNV-1a hash table + wildcard fallback
Module Loader     core_module.c        dlopen/dlsym, reference-counted safe unload
Message System    core_message.c       Alloc, route, free with atomic ID generation
Authentication    core_auth.c          SHA-256 passwords, API keys, session tokens
Event Loop        core_event.c         libev wrapper (epoll/kqueue/select)
Pub/Sub           core_pubsub.c        Pattern matching (exact, wildcard, global)
Event Registry    core_events.c        ACL-controlled event subscriptions
Wire Protocol     core_wire.c          Binary serialization for federation
File Store        core_store.c         INI files with atomic writes
Multi-Storage     core_storage.c       Provider registry (file+sqlite+psql)
Config            core_config.c        INI parser with per-module sections
Hash Table        core_hashtable.c     FNV-1a open-addressing, auto-resize at 75%
Handlers          core_handlers.c      All /core, /auth, /users, /groups paths
Instance          portal_instance.c    Instance wiring + crash isolation
Log               core_log.c           Colored timestamped logging

Path-Based Routing

When a module loads, it registers the paths it owns. These paths are stored in an O(1) hash table using FNV-1a hashing. When a message arrives, the router looks up the full path in the hash table. If no exact match is found, it falls back to progressively shorter wildcard paths until a handler is found or the lookup fails.
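For reference, the FNV-1a hash itself is only a few lines. A 64-bit sketch follows, using the well-known FNV constants; whether core_hashtable.c uses the 32-bit or 64-bit variant is an assumption:

#include <stdint.h>

/* 64-bit FNV-1a over a NUL-terminated path string */
static uint64_t fnv1a_64(const char *s)
{
    uint64_t h = 0xcbf29ce484222325ull;   /* FNV offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;         /* xor the byte first (the "1a" order) */
        h *= 0x100000001b3ull;            /* then multiply by the FNV prime */
    }
    return h;
}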

# Incoming path: /iot/resources/devices

/iot/resources/devices   → exact match (checked first)
/iot/resources/*         → wildcard fallback
/iot/*                   → wildcard fallback
/*                       → global wildcard (last resort)

This means a module can register /iot/* and handle all IoT traffic, or register specific sub-paths for fine-grained control. The exact match always wins. Wildcards only fire when no exact match exists. The lookup is O(1) for exact matches and O(depth) for wildcard fallback — in practice, paths rarely exceed 4 levels.

No regular expressions. Path matching uses hash lookups and progressive shortening. This is deliberate — regex in a hot path is a latency bomb. The wildcard fallback is predictable, debuggable, and fast.
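A sketch of the progressive-shortening lookup, assuming an ht_get() lookup over the path hash table. The names hashtable_t, ht_get, and route_lookup are illustrative, not the actual core_path.c API:

#include <string.h>

typedef struct hashtable hashtable_t;                   /* opaque; assumed */
extern void *ht_get(hashtable_t *ht, const char *key);  /* assumed lookup API */

/* Exact match first, then progressively shorter "<prefix>/*" keys,
 * ending at the global "/*". Returns NULL when no handler exists. */
void *route_lookup(hashtable_t *ht, const char *path)
{
    void *handler = ht_get(ht, path);      /* 1. exact match always wins */
    if (handler)
        return handler;

    char buf[260];
    if (strlen(path) >= sizeof(buf) - 2)
        return NULL;                       /* path too long for this sketch */
    strcpy(buf, path);

    char *slash;
    while ((slash = strrchr(buf, '/')) != NULL) {
        strcpy(slash, "/*");               /* /iot/resources/devices → /iot/resources/* */
        if ((handler = ht_get(ht, buf)) != NULL)
            return handler;
        *slash = '\0';                     /* drop the segment, try one level up */
    }
    return NULL;
}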

Label-Based ACL

Access control in TLS uses labels. Users belong to groups, which give them labels. Paths can require labels. When a message arrives, the router checks if the authenticated user has at least one label that matches the path's required labels. If the path has no labels, it is open to everyone.

Path Labels    User Labels    Result
(none)         (any)          ALLOW — open path
admin          admin, dev     ALLOW — "admin" matches
admin          dev, viewer    DENY — no match
admin, dev     dev            ALLOW — "dev" matches
(any)          (root user)    ALLOW — root bypasses all

The check is a simple set intersection. No role hierarchies, no inheritance trees, no RBAC matrices. A label either matches or it does not. This makes access decisions auditable by inspection — you can read a path's labels and a user's labels and know the answer without running code.
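A sketch of that intersection in C, assuming labels are stored as string arrays. The function name and signature are illustrative, not the actual core_auth.c API:

#include <stdbool.h>
#include <string.h>

/* ALLOW if the path has no labels, or if any user label matches any
 * path label. The root bypass is checked before this is called. */
static bool acl_allow(const char **path_labels, size_t n_path,
                      const char **user_labels, size_t n_user)
{
    if (n_path == 0)
        return true;                    /* open path */

    for (size_t i = 0; i < n_path; i++)
        for (size_t j = 0; j < n_user; j++)
            if (strcmp(path_labels[i], user_labels[j]) == 0)
                return true;            /* one match is enough */

    return false;                       /* empty intersection → DENY */
}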

Root bypasses everything. The root user is the only exception to the label system. This is intentional — root is for administration, not for application logic. Modules should authenticate with their own credentials, never as root.

Crash Isolation

A crashing module must never take down the core. TLS wraps every module handler call with sigsetjmp/siglongjmp. If a module triggers SIGSEGV or SIGBUS, the signal handler catches it, logs the crash, marks the module as unloaded, and returns an error response. The core continues serving all other modules without interruption.

crash_sig = sigsetjmp(g_crash_jmp, 1);
if (crash_sig == 0) {
    rc = module->fn_handle(core, msg, resp);  // safe call
} else {
    LOG_ERROR("MODULE CRASH: '%s' signal %d", mod_name, crash_sig);
    resp->status = PORTAL_INTERNAL_ERROR;
    module->loaded = 0;  // auto-disable crashed module
}
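The matching signal handler is not shown above. A minimal sketch, assuming the g_crash_jmp buffer from the snippet; the handler name and installation code are assumptions:

#include <setjmp.h>
#include <signal.h>
#include <string.h>

sigjmp_buf g_crash_jmp;

/* Jump back to the sigsetjmp() call with the signal number; the
 * non-zero return value there routes into the error branch. Because
 * sigsetjmp(..., 1) saved the signal mask, siglongjmp restores it,
 * so the handler can fire again on the next crash. */
static void crash_handler(int sig)
{
    siglongjmp(g_crash_jmp, sig);
}

static void crash_guard_install(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = crash_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGBUS, &sa, NULL);
}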

After a crash, the module is automatically disabled. It can be reloaded with module reload <name> once the bug is fixed. No restart of the core is needed. No other module is affected. This is the difference between "the IoT module crashed" and "the entire system is down."

Not a substitute for quality. Crash isolation is a safety net, not an excuse. Modules that crash repeatedly should be fixed, not tolerated. The log records every crash with the module name and signal number for immediate diagnosis.

Event System

Modules register events they can emit. Other modules (or external clients) subscribe to those events. When an event fires, all subscribers are notified. The event registry is ACL-controlled — subscribers need matching labels to receive events from protected paths.

There are two delivery mechanisms:

Mechanism           Target              How
Internal callbacks  In-process modules  Direct function call within the core process
fd notifications    External clients    write() to the client's file descriptor

Internal callbacks are synchronous and fast — they execute in the context of the emitting module. File descriptor notifications are asynchronous — the event is serialized and written to the client's connection. This dual mechanism means a module can react to events in real time, and an external client connected via TCP or WebSocket receives the same events without polling.
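A sketch of that dual delivery, assuming each subscriber carries either an in-process callback or a client file descriptor. The subscriber_t layout and event_fanout name are illustrative:

#include <stddef.h>
#include <unistd.h>

typedef struct subscriber {
    void (*cb)(const void *event, size_t len);  /* in-process module, or NULL */
    int fd;                                     /* external client, or -1 */
    struct subscriber *next;
} subscriber_t;

static void event_fanout(subscriber_t *subs, const void *wire_buf, size_t len)
{
    for (subscriber_t *s = subs; s != NULL; s = s->next) {
        if (s->cb != NULL)
            s->cb(wire_buf, len);          /* synchronous: runs in the emitter's context */
        else if (s->fd >= 0)
            (void)write(s->fd, wire_buf, len);  /* pre-serialized event bytes; a real
                                                   loop would handle partial writes */
    }
}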

Storage Architecture

TLS uses a provider-based storage system. When a user or group is modified, the change is written to all registered storage providers simultaneously. When a read is needed, providers are queried in order — the first successful read wins.

User change → core_storage
    → file provider (always)       → /etc/portal/<name>/users/admin.conf
    → sqlite provider (if loaded)  → portal.db
    → psql provider (if loaded)    → remote PostgreSQL

The file provider is always present — it is built into the core. SQLite and PostgreSQL providers are modules that register themselves as storage backends on load. This means a standalone instance uses INI files by default, and adding a database is a one-line config change with no migration.

Fan-out writes, first successful read wins. This design keeps all providers in sync on writes, while reads are served by the first provider in registration order that answers. If the PostgreSQL server is unreachable, the file provider still works. The system degrades gracefully — it never fails completely because one backend is down.
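A sketch of both halves, assuming a linked list of provider vtables. The provider_t layout and function names are illustrative, not the actual core_storage.c API:

#include <stddef.h>

typedef struct provider {
    int (*write)(const char *key, const void *val, size_t len);
    int (*read)(const char *key, void *buf, size_t cap);
    struct provider *next;
} provider_t;

/* Fan-out: write to every registered provider. One failing backend
 * does not stop the others, so the file provider keeps working. */
static int storage_write(provider_t *head, const char *key,
                         const void *val, size_t len)
{
    int ok = 0;
    for (provider_t *p = head; p != NULL; p = p->next)
        if (p->write(key, val, len) == 0)
            ok++;
    return ok > 0 ? 0 : -1;     /* succeed if at least one backend took it */
}

/* Query providers in registration order; first successful read wins. */
static int storage_read(provider_t *head, const char *key,
                        void *buf, size_t cap)
{
    for (provider_t *p = head; p != NULL; p = p->next)
        if (p->read(key, buf, cap) == 0)
            return 0;
    return -1;                  /* all backends failed */
}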