⬡ Lumamesh

Run a Lumamesh relay node

Anyone with a public UDP port can run a relay. The relay is a single Go binary: one UDP port, no database, no HTTP by default. It only handles signaling; once two browsers have exchanged SDP they communicate directly peer-to-peer and the relay drops out of the path.

Requirements: A machine reachable on a public IP (or with a port statically forwarded to one) with one UDP port open (default 3478). Go 1.22+ to build from source. CGNAT home connections won't work for inbound browser sessions; use a small VPS ($3–5/mo) or a home box with a real public IP.

Quick start

1. Build

git clone https://github.com/lumamesh/lumamesh.git
cd lumamesh/pion-server
go build -o lumamesh-relay .

2. Open the UDP port

# Linux UFW
sudo ufw allow 3478/udp

# iptables
sudo iptables -A INPUT -p udp --dport 3478 -j ACCEPT

3. Run

PUBLIC_IP=YOUR.PUBLIC.IP UDP_PORT=3478 ./lumamesh-relay

On first run the relay generates and persists its DTLS cert, ICE password, and Ed25519 node identity key, then prints the relay config blob:

╔══════════════════════════════════════════════════════════╗
║  Lumamesh relay config — paste into encodeRelayConfig()  ║
╚══════════════════════════════════════════════════════════╝
{
  "fingerprint": "AB:CD:...:EF",
  "ip":          "YOUR.PUBLIC.IP",
  "nk":          "<base64url Ed25519 pubkey>",
  "port":        3478,
  "pwd":         "<32 hex chars>",
  "ufrag":       "luma"
}
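
The blob is plain JSON, so it can be decoded with a small struct whose tags mirror the printed field names. A sketch, assuming nothing about the relay's internals beyond the fields shown above (the `RelayConfig` and `parseRelayConfig` names are illustrative, not part of any Lumamesh API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RelayConfig mirrors the fields of the printed blob.
type RelayConfig struct {
	Fingerprint string `json:"fingerprint"`
	IP          string `json:"ip"`
	NK          string `json:"nk"` // base64url Ed25519 public key
	Port        int    `json:"port"`
	Pwd         string `json:"pwd"` // ICE password, 32 hex chars
	Ufrag       string `json:"ufrag"`
}

// parseRelayConfig decodes a config blob into a RelayConfig.
func parseRelayConfig(blob []byte) (RelayConfig, error) {
	var cfg RelayConfig
	err := json.Unmarshal(blob, &cfg)
	return cfg, err
}

func main() {
	blob := []byte(`{"fingerprint":"AB:CD:EF","ip":"203.0.113.10",
		"nk":"cHVia2V5","port":3478,
		"pwd":"0123456789abcdef0123456789abcdef","ufrag":"luma"}`)
	cfg, err := parseRelayConfig(blob)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s:%d ufrag=%s\n", cfg.IP, cfg.Port, cfg.Ufrag)
}
```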

Save the four identity files — they are the node's entire identity. If you lose them you'll need to re-inscribe a new config blob:

server.crt          # DTLS certificate  (fingerprint changes if regenerated)
server.key          # DTLS private key
server.key.icepwd   # ICE password
node.key            # Ed25519 node identity (nk changes if regenerated)
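
To see why regenerating these files changes the blob, here is a sketch of how the two identity-derived fields could be computed, on the assumption that the fingerprint is the standard WebRTC-style SHA-256 hash of the DER certificate (colon-separated uppercase hex) and nk is the raw 32-byte Ed25519 public key, base64url-encoded without padding. The relay's actual derivation may differ in detail:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/ed25519"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/base64"
	"fmt"
	"math/big"
	"strings"
	"time"
)

// dtlsFingerprint renders the SHA-256 hash of a DER certificate in the
// colon-separated uppercase-hex form used for DTLS fingerprints.
func dtlsFingerprint(der []byte) string {
	sum := sha256.Sum256(der)
	parts := make([]string, len(sum))
	for i, b := range sum {
		parts[i] = fmt.Sprintf("%02X", b)
	}
	return strings.Join(parts, ":")
}

func main() {
	// Self-signed cert, as a stand-in for server.crt.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "lumamesh-relay"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println("fingerprint:", dtlsFingerprint(der))

	// Ed25519 identity, as a stand-in for node.key.
	pub, _, _ := ed25519.GenerateKey(rand.Reader)
	fmt.Println("nk:", base64.RawURLEncoding.EncodeToString(pub))
}
```

Both values are pure functions of the key material, so a regenerated server.crt or node.key necessarily produces a different fingerprint or nk.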

Run as a systemd service

# /etc/systemd/system/lumamesh-relay.service
[Unit]
Description=Lumamesh Relay
After=network-online.target
Wants=network-online.target

[Service]
WorkingDirectory=/opt/lumamesh
ExecStart=/opt/lumamesh/lumamesh-relay
Environment=PUBLIC_IP=YOUR.PUBLIC.IP
Environment=UDP_PORT=3478
Environment=HEALTH_LISTEN=127.0.0.1:7401
Restart=always
RestartSec=3
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/lumamesh
PrivateTmp=true

[Install]
WantedBy=multi-user.target
Enable and start it, then follow the logs:

sudo systemctl daemon-reload
sudo systemctl enable --now lumamesh-relay
sudo journalctl -u lumamesh-relay -f

Verify it's working

# Health check (requires HEALTH_LISTEN)
curl http://127.0.0.1:7401/healthz
# {"nodeId":"...","ok":true,"ts":...}

# Mesh state
curl http://127.0.0.1:7401/statsz | jq

Mesh with other nodes (optional)

Multiple relays that share a MESH_SALT gossip room hints to each other over TCP, so browsers always find each other regardless of which node they land on.

# Node A
PUBLIC_IP=203.0.113.10 MESH_SALT=pick-a-long-random-string \
MESH_LISTEN=0.0.0.0:7400 MESH_PEERS=203.0.113.20:7400 ./lumamesh-relay

# Node B
PUBLIC_IP=203.0.113.20 MESH_SALT=pick-a-long-random-string \
MESH_LISTEN=0.0.0.0:7400 MESH_PEERS=203.0.113.10:7400 ./lumamesh-relay

Every node in a mesh cluster must share the identical MESH_SALT value. Different salts = rooms never converge.
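
One hypothetical way to picture why mismatched salts never converge: if each node keys its room identifiers with the salt before gossiping (sketched here with HMAC-SHA256; the relay's real derivation may differ), two nodes with different salts compute different hints for the same room and so never match:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// roomHint derives a salted, truncated identifier for a room.
// Illustrative only: shows the salt-dependence, not the real scheme.
func roomHint(salt, roomID string) string {
	mac := hmac.New(sha256.New, []byte(salt))
	mac.Write([]byte(roomID))
	return hex.EncodeToString(mac.Sum(nil))[:16]
}

func main() {
	a := roomHint("salt-one", "room-42")
	b := roomHint("salt-one", "room-42")
	c := roomHint("salt-two", "room-42")
	fmt.Println(a == b, a == c) // same salt agrees; different salt diverges
}
```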

Add your node to the network

The public relay list is curated at lumamesh.com/relay.txt. To have your node added, open a pull request on GitHub with your node's config (ip/host, port, fingerprint, nk). The network will automatically discover your node at runtime via the nodes gossip action — the curated list is just the bootstrap entry point.

Environment variables

Var             Default      Purpose
PUBLIC_IP       127.0.0.1    Public IPv4 browsers dial. Required.
PUBLIC_HOST     (none)       DNS name (resolved client-side via DoH). Use for dynamic DNS.
UDP_PORT        3478         Browser-facing UDP port.
ICE_UFRAG       luma         ICE username fragment.
ICE_PWD         auto         ICE password. Auto-generated and persisted if unset.
CERT_FILE       server.crt   DTLS cert path.
KEY_FILE        server.key   DTLS key path.
NODE_KEY        node.key     Ed25519 identity path.
HEALTH_LISTEN   (none)       host:port for /healthz + /statsz.
MESH_SALT       (none)       Enables mesh gossip. Identical across all mesh nodes.
MESH_LISTEN     (none)       host:port to accept gossip peers (TCP).
MESH_PEERS      (none)       Comma-separated host:port of peer nodes to dial.
MAX_SESSIONS    1000         Concurrent browser sessions.
MAX_ROOM_SIZE   250          Members per room.