Federation & Networking

Two or more Portal instances connect and share resources over encrypted, authenticated channels. Remote paths work exactly like local paths — federation is transparent.

Topology: Hub Routing

Federation uses a hub-and-spoke model. A node with a public IP acts as the hub, relaying traffic between nodes that sit behind NAT. NAT nodes only make outbound connections — they never need open inbound ports.

During the PORTAL02 handshake, the hub advertises its list of connected peers. Each NAT node learns about all the other nodes through the hub. Indirect peers are registered automatically; no manual configuration is required for reachability.

  ssip841 (NAT) ───TLS───> devtest (public hub) <───TLS─── asus (NAT)
                                    │
                     Hub forwards: ssip841 ↔ asus

The hub inspects the destination node name in each message and forwards it to the correct peer. From the perspective of ssip841, sending a message to asus is identical to sending one to devtest — the hub routing is invisible.
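
The forwarding rule can be pictured as a lookup on the first path segment. The sketch below is illustrative C, not Portal's actual source; peer_t and send_frame() are assumed names, and only the rule that the destination node name selects the peer comes from the description above.

/* Illustrative hub-side routing sketch (assumed types and helpers). */
#include <stddef.h>
#include <string.h>

typedef struct peer {
    char         name[64];    /* node name learned during the handshake */
    struct peer *next;        /* next entry in the hub's peer list      */
} peer_t;

extern int send_frame(peer_t *peer, const void *frame, size_t len); /* hypothetical */

/* "/asus/core/status" -> destination node "asus"; relay the frame unchanged. */
static int hub_route(peer_t *peers, const char *path,
                     const void *frame, size_t len)
{
    char node[64];
    const char *p = path + 1;            /* skip the leading '/'        */
    size_t n = strcspn(p, "/");          /* length of the first segment */

    if (n == 0 || n >= sizeof(node))
        return -1;
    memcpy(node, p, n);
    node[n] = '\0';

    for (peer_t *q = peers; q != NULL; q = q->next)
        if (strcmp(q->name, node) == 0)
            return send_frame(q, frame, len);

    return -1;                           /* destination not connected   */
}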

Wire Protocol (PORTAL02)

All federation traffic uses a binary, length-prefixed wire protocol in network byte order (big-endian). Every frame on the wire follows the same serialization rules:

Message serialization:

  • id — unique request identifier for matching responses
  • method — GET, SET, EXEC, SUB, UNSUB, EVENT
  • path — target resource (/node/module/resource)
  • headers — key-value metadata pairs
  • body — arbitrary payload bytes
  • context — authentication identity, trace ID, and labels (travels end-to-end)

Response serialization:

  • status — numeric result code (200 OK, 404 Not Found, 403 Forbidden, etc.)
  • headers — response metadata
  • body — response payload
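
As an illustration of the framing rules, the C sketch below encodes a request with 4-byte big-endian length prefixes. The field order and widths are assumptions made for the example (headers and context are omitted); this is not the normative PORTAL02 layout.

/* Illustrative length-prefixed, big-endian encoding sketch. */
#include <arpa/inet.h>   /* htonl() converts to network byte order */
#include <stdint.h>
#include <string.h>

/* Append one field as a 4-byte big-endian length followed by the raw bytes. */
static size_t put_field(uint8_t *out, const void *data, uint32_t len)
{
    uint32_t be = htonl(len);
    memcpy(out, &be, sizeof(be));
    memcpy(out + sizeof(be), data, len);
    return sizeof(be) + len;
}

/* Serialize a request: id, method, path, body (headers and context elided). */
size_t encode_request(uint8_t *out, uint32_t id, const char *method,
                      const char *path, const uint8_t *body, uint32_t body_len)
{
    size_t off = 0;
    uint32_t be_id = htonl(id);

    memcpy(out + off, &be_id, sizeof(be_id));
    off += sizeof(be_id);
    off += put_field(out + off, method, (uint32_t)strlen(method));
    off += put_field(out + off, path,   (uint32_t)strlen(path));
    off += put_field(out + off, body,   body_len);
    return off;   /* the caller writes the total frame length in front */
}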

Handshake sequence (sent by the initiator, in order):

  1. Magic bytes PORTAL02
  2. SHA-256 hash of the federation key
  3. Local node name
  4. List of currently connected peers (advertised peer list)

If the key hash does not match, the connection is terminated immediately.
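
A hedged sketch of the initiator side is shown below, assuming a hypothetical tls_write() helper over the established TLS channel; only the order of the four handshake items is taken from the list above.

/* Illustrative initiator-side handshake sketch (tls_write() is assumed). */
#include <openssl/sha.h>
#include <string.h>

extern int tls_write(void *tls, const void *buf, size_t len); /* hypothetical */

int send_handshake(void *tls, const char *federation_key,
                   const char *node_name, const char *peer_list)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];

    /* 1. magic bytes identifying the protocol version */
    if (tls_write(tls, "PORTAL02", 8) < 0) return -1;

    /* 2. SHA-256 hash of the shared federation key */
    SHA256((const unsigned char *)federation_key, strlen(federation_key), digest);
    if (tls_write(tls, digest, sizeof(digest)) < 0) return -1;

    /* 3. local node name */
    if (tls_write(tls, node_name, strlen(node_name)) < 0) return -1;

    /* 4. advertised peer list */
    if (tls_write(tls, peer_list, strlen(peer_list)) < 0) return -1;

    return 0;
}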

TLS Encryption

All federation traffic is encrypted using OpenSSL. Every Portal instance maintains its own certs/ directory containing its certificate and private key.

Self-signed certificates are auto-generated on first run with portal -C. This makes deployment trivial — no CA infrastructure is required. Certificate verification is configurable: disable it for self-signed environments, enable it when using a proper CA chain.

The TLS layer sits beneath the wire protocol. The binary PORTAL02 frames are transmitted inside the encrypted channel.
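
For illustration, the sketch below builds an OpenSSL context from the options shown in the Configuration section (cert_file, key_file, tls_verify). It is a minimal example with abbreviated error handling, not Portal's actual initialization code.

/* Illustrative OpenSSL context setup sketch. */
#include <openssl/ssl.h>

SSL_CTX *make_tls_ctx(const char *cert_file, const char *key_file, int tls_verify)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_method());
    if (ctx == NULL)
        return NULL;

    if (SSL_CTX_use_certificate_file(ctx, cert_file, SSL_FILETYPE_PEM) != 1 ||
        SSL_CTX_use_PrivateKey_file(ctx, key_file, SSL_FILETYPE_PEM) != 1) {
        SSL_CTX_free(ctx);
        return NULL;
    }

    /* tls_verify = false: accept self-signed peers; true: require a valid CA chain */
    SSL_CTX_set_verify(ctx, tls_verify ? SSL_VERIFY_PEER : SSL_VERIFY_NONE, NULL);
    return ctx;
}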

Federation Key Authentication

Federation peers authenticate using a shared secret. During the handshake, the connecting node sends a SHA-256 hash of its configured federation key. The receiving node compares this hash against the hash of its own key.

If the hashes do not match, the peer is rejected and the connection is closed. This ensures that only nodes belonging to the same federation can communicate, even if TLS certificate verification is disabled.

The federation key is configured in mod_node.conf and must be identical across all nodes in the federation.
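
The receiving side's check amounts to hashing the local key and comparing digests. The sketch below is illustrative; the function name is an assumption, and CRYPTO_memcmp() is used here for a constant-time comparison.

/* Illustrative key-hash verification sketch. */
#include <openssl/crypto.h>
#include <openssl/sha.h>
#include <string.h>

/* Returns 1 when the peer presented a matching key hash, 0 otherwise. */
int federation_key_matches(const unsigned char *peer_digest, const char *local_key)
{
    unsigned char local_digest[SHA256_DIGEST_LENGTH];

    SHA256((const unsigned char *)local_key, strlen(local_key), local_digest);
    return CRYPTO_memcmp(peer_digest, local_digest, sizeof(local_digest)) == 0;
}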

Worker Thread Pool

Each peer connection is backed by a configurable number of worker threads (default: 4). Incoming requests from a peer are dispatched to workers in round-robin order, enabling concurrent processing of multiple messages from the same peer.

Each worker maintains its own persistent TCP connection to the peer. This avoids connection setup overhead and ensures stable throughput under load. The thread count is tunable per node via threads_per_peer in the configuration.
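
A minimal sketch of the dispatch step follows, assuming a hypothetical per-peer pool structure and queue_push() helper; only the round-robin rotation and the default of four workers come from the text above.

/* Illustrative round-robin dispatch sketch for a per-peer worker pool. */
#include <pthread.h>

#define THREADS_PER_PEER 4

typedef struct {
    pthread_t thread;      /* worker thread                               */
    int       sock;        /* this worker's own persistent TCP connection */
} worker_t;

typedef struct {
    worker_t workers[THREADS_PER_PEER];
    unsigned next;         /* round-robin cursor                          */
} peer_pool_t;

extern void queue_push(worker_t *w, void *request);  /* hypothetical */

/* Hand an incoming request to the next worker in rotation. */
void dispatch_request(peer_pool_t *pool, void *request)
{
    worker_t *w = &pool->workers[pool->next];
    pool->next = (pool->next + 1) % THREADS_PER_PEER;
    queue_push(w, request);
}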

Auto-Reconnect

Federation connections are resilient by design. The core timer API monitors peer health and detects stuck or dead connections. When a configured peer disconnects, the node automatically retries the connection on a backoff schedule.

TCP keepalive is enabled on all federation sockets with aggressive parameters:

  • Idle timeout: 60 seconds before first probe
  • Probe interval: 30 seconds between probes
  • Probe count: 3 failed probes before declaring the connection dead
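
On Linux these parameters map onto standard socket options. The sketch below is illustrative rather than Portal's source; the values match the defaults listed above.

/* Illustrative keepalive setup sketch using Linux socket options. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int fd)
{
    int on = 1, idle = 60, intvl = 30, cnt = 3;

    if (setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on))    < 0) return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle))  < 0) return -1; /* 60 s idle        */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0) return -1; /* 30 s probe gap   */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt))   < 0) return -1; /* 3 failed probes  */
    return 0;
}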

Indirect peers (discovered through hub advertisement) are removed when the hub connection dies. They are automatically recreated when the hub connection is re-established and the peer list is re-advertised.

Diagnostics

The Portal CLI provides built-in tools for inspecting and troubleshooting federation:

portal:/> ping asus                    # RTT measurement to a specific peer
portal:/> ping all                     # ping every connected peer
portal:/> tracert /asus/core/status    # show hop-by-hop latency to a resource
portal:/> node peers                   # list all peers with traffic counters
portal:/> node status asus             # TLS info, workers, uptime, msgs/bytes

ping measures round-trip time by sending a lightweight probe through the federation channel. tracert shows each hop and its individual latency when a message traverses multiple nodes. node peers and node status provide detailed connection state including TLS version, cipher, worker thread utilization, uptime, and cumulative traffic counters.

Configuration

Federation is configured in mod_node.conf. All options are documented in the config file itself (Law 11):

[mod_node]
node_name        = mynode
listen_port      = 9701
threads_per_peer = 4
tls              = true
cert_file        = /etc/portal/mynode/certs/server.crt
key_file         = /etc/portal/mynode/certs/server.key
tls_verify       = false
federation_key   = shared-secret-here

[nodes]
peer0 = hub-node=10.0.1.5:9706

The [nodes] section lists the peers this instance should connect to on startup. Each entry specifies a peer name and its address. Hub nodes need no entries for NAT peers — those connect inbound and are accepted automatically if the federation key matches.

Transparent Access

Once federation is established, remote resources are accessed using the same path syntax as local ones. The node name is simply the first segment of the path:

portal:/> get /devtest2/core/status     # direct peer
portal:/> get /asus/core/status         # remote NAT peer (routed via hub)
portal:/> get /asus/iot/resources/devices # IoT module on a remote node

From HTTP, the same resources are reachable through the web module's API endpoint:

curl http://host:8080/api/asus/core/status

Modules don't know whether a path is local or remote. A Lua script calling portal.get('/warehouse/serial/com1/read') transparently reads a physical serial port on a remote machine through TLS-encrypted federation, all from a single line of script.

Port Forwarding (mod_tunnel)

The mod_tunnel module enables raw TCP port forwarding through the federation network. A local service can be exported so that remote nodes can reach it, and a remote service can be mapped to a local port for direct access.

After the initial tunnel handshake, mod_tunnel performs zero-overhead byte relay — raw TCP bytes are forwarded through the encrypted federation channel with no additional framing or processing.
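
One relay step can be pictured as a plain read-then-forward with nothing added. The sketch below is illustrative; tls_write() is a hypothetical helper for the encrypted federation link, and the reverse direction is symmetric.

/* Illustrative single relay step sketch. */
#include <unistd.h>

extern int tls_write(void *tls, const void *buf, size_t len); /* hypothetical */

int relay_once(int local_fd, void *tls)
{
    char buf[4096];
    ssize_t n = read(local_fd, buf, sizeof(buf));   /* raw TCP bytes in */

    if (n <= 0)
        return -1;                                  /* closed or error  */
    return tls_write(tls, buf, (size_t)n);          /* forwarded as-is  */
}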

This enables powerful use cases such as SSH access to machines behind NAT. A node behind NAT exports its SSH port (22) through the federation hub. Any peer in the federation can then map that remote port to a local port and connect with a standard SSH client, as if the remote machine were on the local network.

ACL Across Nodes

The wire protocol carries the full authentication context with every message: user identity, group memberships, and labels. This context is set at the originating node and preserved end-to-end through every hop in the federation.

Each node enforces its own ACL rules independently. When a message arrives from a remote peer, the receiving node inspects the attached auth context and applies its local access control policies. A user who has read access on Node A does not automatically get write access on Node B — each node is sovereign over its own resources.
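
A minimal sketch of that local check follows, assuming illustrative context fields and a hypothetical acl_allows() policy helper; only the idea that the receiving node consults its own rules is taken from the text.

/* Illustrative node-local ACL enforcement sketch. */
typedef struct {
    const char *user;      /* identity set at the originating node   */
    const char *groups;    /* group memberships, carried end-to-end  */
    const char *labels;    /* labels attached to the request         */
} auth_ctx_t;

extern int acl_allows(const auth_ctx_t *ctx, const char *path,
                      const char *method);   /* hypothetical local policy check */

/* Returns a wire-protocol style status code. */
int check_remote_request(const auth_ctx_t *ctx, const char *path, const char *method)
{
    if (!acl_allows(ctx, path, method))
        return 403;        /* Forbidden: local rules reject the request */
    return 200;            /* OK: handle the request locally            */
}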

This design means federation does not weaken security. Connecting two nodes does not merge their permission models. Each node trusts the identity from the wire protocol but enforces its own rules.