
Every hour you don't zap, a donkey eats another cabbage. You can stop this.
mleku · npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku
ORLY supports a modular IPC architecture where core functionality runs as independent gRPC services:
orly launcher (process supervisor)
├── orly db (gRPC :50051) - Event storage & queries
├── orly acl (gRPC :50052) - Access control
├── orly bridge (SMTP :2525) - Marmot email bridge (DM ↔ SMTP)
├── orly sync distributed (gRPC :50061) - Peer-to-peer sync
├── orly sync cluster (gRPC :50062) - Cluster replication
├── orly sync relaygroup (gRPC :50063) - Relay group config (Kind 39105)
├── orly sync negentropy (gRPC :50064) - NIP-77 set reconciliation
└── orly (WebSocket/HTTP) - Main relay
Benefits:
See docs/IPC_SYNC_SERVICES.md for detailed API documentation.
- Prerequisites
- Basic Build
- Building with Web UI
- Web UI
- Sprocket Event Processing
- Policy System
- Automated Deployment
- Network Options
- systemd Service Management
- Remote Deployment
- Configuration
- Firewall Configuration
- Monitoring
- Follows ACL
- Curation ACL
- Cluster Replication
- Client Setup (White Noise, etc.)
Bug reports and feature requests that do not follow the protocol will not be accepted.
Before submitting any issue, you must read and follow BUG_REPORTS_AND_FEATURE_REQUEST_PROTOCOL.md.
Requirements:
Issues missing required information will be closed without review.
IMPORTANT: ORLY requires a minimum of 500MB of free memory to operate. The relay uses adaptive PID-controlled rate limiting to manage memory pressure. By default, it will:

- Auto-detect available system memory at startup
- Target 66% of available memory, capped at 1.5GB for optimal performance
- Fail to start if less than 500MB is available

You can override the memory target with ORLY_RATE_LIMIT_TARGET_MB (e.g., ORLY_RATE_LIMIT_TARGET_MB=2000 for 2GB). To disable rate limiting (not recommended): ORLY_RATE_LIMIT_ENABLED=false
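To make the PID-controlled rate limiting concrete, here is a minimal sketch of a PID loop steering a rate limit toward a memory target. The gains, names, and update policy are illustrative only, not ORLY's actual controller:

```go
package main

import "fmt"

// PID holds controller state. The output adjusts an event rate limit
// so that measured memory converges on the configured target.
type PID struct {
	Kp, Ki, Kd float64
	integral   float64
	prevErr    float64
}

// Step takes target and measured memory (MB) and returns an
// adjustment to apply to the current rate limit.
func (c *PID) Step(targetMB, measuredMB float64) float64 {
	err := targetMB - measuredMB // positive error = headroom, raise limit
	c.integral += err
	deriv := err - c.prevErr
	c.prevErr = err
	return c.Kp*err + c.Ki*c.integral + c.Kd*deriv
}

func main() {
	c := &PID{Kp: 0.1, Ki: 0.01, Kd: 0.05}
	limit := 100.0    // events/sec
	measured := 1200.0 // MB, above the 1000MB target
	for i := 0; i < 5; i++ {
		limit += c.Step(1000, measured)
		if limit < 1 {
			limit = 1 // never stop accepting events entirely
		}
		measured -= 40 // pretend memory pressure eases as the limit tightens
		fmt.Printf("step %d: limit %.1f events/sec\n", i, limit)
	}
}
```

The key behavior: memory above target produces a negative error, so the limit falls until pressure subsides, then recovers.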
ORLY is a nostr relay written from the ground up to be performant, low latency, and built with a number of features designed to make it well suited for:
ORLY leverages high-performance libraries and custom optimizations for exceptional speed:
The encoders achieve 24% faster JSON marshaling, 16% faster canonical encoding, and 54-91% reduction in memory allocations through custom buffer pre-allocation and zero-allocation optimization techniques.
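The allocation reduction comes from the general append-into-a-reused-buffer technique. A minimal sketch of the idea (not ORLY's actual encoder; the event shape here is simplified):

```go
package main

import (
	"fmt"
	"strconv"
)

// appendEvent marshals a minimal event into dst using append-style
// encoding; callers can reuse dst across events to avoid allocations.
func appendEvent(dst []byte, kind int, content string) []byte {
	dst = append(dst, `{"kind":`...)
	dst = strconv.AppendInt(dst, int64(kind), 10)
	dst = append(dst, `,"content":`...)
	dst = strconv.AppendQuote(dst, content)
	dst = append(dst, '}')
	return dst
}

func main() {
	// Pre-allocate once and reuse: after warm-up the encode loop
	// performs no further heap allocations.
	buf := make([]byte, 0, 512)
	for i := 0; i < 3; i++ {
		buf = buf[:0] // reset length, keep capacity
		buf = appendEvent(buf, 1, fmt.Sprintf("note %d", i))
		fmt.Println(string(buf))
	}
}
```

The standard library's `strconv.Append*` functions exist precisely for this pattern, writing into a caller-owned buffer instead of returning fresh strings.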
ORLY uses the fast embedded Badger database with a storage schema designed for high-performance querying and event storage.
ORLY is a standard Go application that can be built using the Go toolchain.
To build the unified binary (relay + all subcommands):
git clone <repository-url>
cd next.orly.dev
go build -o orly ./cmd/orly
To build the relay-only binary (no subcommands):
go build -o orly .
To build with the embedded web interface:
# Build the Svelte web application
cd app/web
bun install
bun run build
# Build the Go binary from project root
cd ../../
go build -o orly ./cmd/orly
The recommended way to build and embed the web UI is using the provided script:
./scripts/update-embedded-web.sh
This script will:
- Build the web app in app/web to app/web/dist using Bun (preferred) or fall back to npm/yarn/pnpm
- Run go install from the repository root so the binary picks up the new embedded assets

For manual builds, you can also use:
#!/bin/bash
# build.sh
echo "Building Svelte app..."
cd app/web
bun install
bun run build
echo "Building Go binary..."
cd ../../
go build -o orly ./cmd/orly
echo "Build complete!"
Make it executable with chmod +x build.sh and run with ./build.sh.
ORLY includes a modern web-based user interface built with Svelte for relay management and monitoring.
The web UI is embedded in the relay binary and accessible at the relay's root path.
For development with hot-reloading, ORLY can proxy web requests to a local dev server while still handling WebSocket relay connections and API requests.
Environment Variables:
- ORLY_WEB_DISABLE - Set to true to disable serving the embedded web UI
- ORLY_WEB_DEV_PROXY_URL - URL of the dev server to proxy web requests to (e.g., localhost:8080)

Setup:
cd app/web
bun install
bun run dev
Note the port sirv is listening on (e.g., http://localhost:8080).
export ORLY_WEB_DISABLE=true
export ORLY_WEB_DEV_PROXY_URL=localhost:8080
./orly
The relay will:
- Serve the Nostr protocol over WebSocket at /
- Handle API requests at /api/*
- Proxy all other web requests to the dev server

With a reverse proxy/tunnel:
If you're running behind a reverse proxy or tunnel (e.g., Caddy, nginx, Cloudflare Tunnel), the setup is the same. The relay listens locally and your reverse proxy forwards traffic to it:
Browser → Reverse Proxy → ORLY (port 3334) → Dev Server (port 8080)
                              ↓
                        WebSocket/API
Example with the relay on port 3334 and sirv on port 8080:
# Terminal 1: Dev server
cd app/web && bun run dev
# Output: Your application is ready~!
# Local: http://localhost:8080
# Terminal 2: Relay
export ORLY_WEB_DISABLE=true
export ORLY_WEB_DEV_PROXY_URL=localhost:8080
export ORLY_PORT=3334
./orly
Disabling the web UI without a proxy:
If you only want to disable the embedded web UI (without proxying to a dev server), just set ORLY_WEB_DISABLE=true without setting ORLY_WEB_DEV_PROXY_URL. The relay will return 404 for web UI requests while still handling WebSocket and API requests.
ORLY includes a powerful sprocket system for external event processing scripts. Sprocket scripts enable custom filtering, validation, and processing logic for Nostr events before storage.
- accept, reject, or shadowReject events based on custom logic

export ORLY_SPROCKET_ENABLED=true
export ORLY_APP_NAME="ORLY"
# Place script at ~/.config/ORLY/sprocket.sh
For detailed configuration and examples, see the sprocket documentation.
ORLY includes a comprehensive policy system for fine-grained control over event storage and retrieval. Configure custom validation rules, access controls, size limits, and age restrictions.
export ORLY_POLICY_ENABLED=true
# Default policy file: ~/.config/ORLY/policy.json
# OPTIONAL: Use a custom policy file location
# WARNING: ORLY_POLICY_PATH MUST be an ABSOLUTE path (starting with /)
# Relative paths will be REJECTED and the relay will fail to start
export ORLY_POLICY_PATH=/etc/orly/policy.json
For detailed configuration and examples, see the Policy Usage Guide.
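The absolute-path requirement above amounts to a startup check like the following sketch (the function name is illustrative, not ORLY's internal API):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvePolicyPath mirrors the documented rule: an explicit
// ORLY_POLICY_PATH must be absolute, otherwise startup fails;
// when unset, the default location is used.
func resolvePolicyPath(envValue, defaultPath string) (string, error) {
	if envValue == "" {
		return defaultPath, nil
	}
	if !filepath.IsAbs(envValue) {
		return "", fmt.Errorf("ORLY_POLICY_PATH must be absolute, got %q", envValue)
	}
	return envValue, nil
}

func main() {
	home, _ := os.UserHomeDir()
	def := filepath.Join(home, ".config", "ORLY", "policy.json")

	p, err := resolvePolicyPath("/etc/orly/policy.json", def)
	fmt.Println(p, err) // absolute path accepted

	_, err = resolvePolicyPath("policy.json", def)
	fmt.Println(err) // relative path rejected at startup
}
```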
ORLY includes an automated deployment script that handles Go installation, dependency setup, building, and systemd service configuration.
The deployment script (scripts/deploy.sh) provides a complete setup solution:
# Clone the repository
git clone <repository-url>
cd next.orly.dev
# Run the deployment script
./scripts/deploy.sh
The script will:
- Install Go if not already present (to ~/.local/go)
- Set up the Go environment in ~/.goenv, updating ~/.bashrc
- Build and embed the web UI via update-embedded-web.sh
- Install the orly binary to ~/.local/bin/orly

After deployment, reload your shell environment:
source ~/.bashrc
ORLY can handle TLS itself (direct mode) or sit behind a reverse proxy. Choose one.
Run ORLY on localhost and let Caddy, nginx, or another proxy handle TLS termination, WebSocket upgrades, and certificate renewal. This is the production setup used at relay.orly.dev.
Internet (wss://relay.example.com)
        ↓
Caddy/nginx (:443, TLS termination)
        ↓
ORLY (127.0.0.1:3334, plain HTTP/WebSocket)
1. Configure ORLY to listen on localhost only:
export ORLY_LISTEN=127.0.0.1
export ORLY_PORT=3334
Do NOT set ORLY_TLS_DOMAINS; the reverse proxy handles TLS.
2. Install and configure the reverse proxy.
Caddy (recommended: automatic HTTPS, minimal config):
# Install Caddy (Ubuntu/Debian)
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
Create /etc/caddy/Caddyfile:
relay.example.com {
reverse_proxy 127.0.0.1:3334
}
That's it. Caddy handles TLS certificates, HTTPS, and WebSocket upgrades automatically. No additional configuration is needed for WebSocket; Caddy proxies upgrade requests by default.
Reload Caddy:
sudo systemctl reload caddy
nginx alternative:
server {
listen 443 ssl http2;
server_name relay.example.com;
ssl_certificate /etc/letsencrypt/live/relay.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/relay.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3334;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
server {
listen 80;
server_name relay.example.com;
return 301 https://$host$request_uri;
}
With nginx you must obtain certificates separately (e.g., certbot --nginx).
3. Open firewall ports:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
4. Verify:
# Check ORLY is listening on localhost
ss -tlnp | grep 3334
# Check proxy is listening externally
ss -tlnp | grep -E ':(80|443)'
# Test WebSocket connection from outside
wscat -c wss://relay.example.com
ORLY handles TLS itself using Let's Encrypt ACME. No reverse proxy needed. ORLY binds directly to ports 80 and 443, which requires setcap since it runs as a non-root user.
Internet (wss://relay.example.com)
        ↓
ORLY (:443 HTTPS/WSS + :80 ACME challenges)
1. Set capabilities on the binary:
# Allow binding to privileged ports without root
sudo setcap 'cap_net_bind_service=+ep' ~/.local/bin/orly
Note: setcap must be re-applied after every binary update.
2. Configure TLS domains:
export ORLY_TLS_DOMAINS=relay.example.com
When ORLY_TLS_DOMAINS is set, ORLY ignores ORLY_PORT and listens on :443 (HTTPS/WSS) and :80 (ACME challenges) instead.
3. Optional: custom certificates:
# Load certificates from files instead of (or in addition to) ACME
export ORLY_CERTS=/path/to/cert1,/path/to/cert2
Certificate files should be named with .pem and .key extensions:
- /path/to/cert1.pem (certificate)
- /path/to/cert1.key (private key)

4. Open firewall ports:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
5. Verify:
# ORLY should be listening on 80 and 443
ss -tlnp | grep orly
# Test WebSocket
wscat -c wss://relay.example.com
| | Reverse Proxy | Direct Listening |
|---|---|---|
| TLS management | Caddy/nginx handles it | ORLY's built-in ACME |
| Multiple services on same IP | Yes (proxy routes by domain) | No (ORLY owns port 443) |
| WebSocket config | Automatic (Caddy) or manual (nginx) | Built-in |
| Binary updates | Just restart ORLY | Restart + re-run setcap |
| Additional software | Caddy or nginx | None |
| Used at relay.orly.dev | Yes (Caddy) | No |
The deployment script creates a systemd service for easy management:
# Start the service
sudo systemctl start orly
# Stop the service
sudo systemctl stop orly
# Restart the service
sudo systemctl restart orly
# Enable service to start on boot
sudo systemctl enable orly --now
# Disable service from starting on boot
sudo systemctl disable orly --now
# Check service status
sudo systemctl status orly
# View service logs
sudo journalctl -u orly -f
# View recent logs
sudo journalctl -u orly --since "1 hour ago"
You can deploy ORLY on a remote server using SSH:
# Deploy to a VPS with SSH key authentication
ssh user@your-server.com << 'EOF'
# Clone and deploy
git clone <repository-url>
cd next.orly.dev
./scripts/deploy.sh
# Configure your relay
echo 'export ORLY_TLS_DOMAINS=relay.example.com' >> ~/.bashrc
echo 'export ORLY_ADMINS=npub1your_admin_key_here' >> ~/.bashrc
# Enable and start the service
sudo systemctl enable orly --now
EOF
# Check deployment status
ssh user@your-server.com 'sudo systemctl status orly'
After deployment, configure your relay by setting environment variables in your shell profile:
# Add to ~/.bashrc or ~/.profile
export ORLY_TLS_DOMAINS=relay.example.com
export ORLY_ADMINS=npub1your_admin_key
export ORLY_ACL_MODE=follows
export ORLY_APP_NAME="MyRelay"
Then restart the service:
source ~/.bashrc
sudo systemctl restart orly
Ensure your firewall allows the necessary ports:
# For TLS-enabled relays
sudo ufw allow 80/tcp # HTTP (ACME challenges)
sudo ufw allow 443/tcp # HTTPS/WSS
# For non-TLS relays
sudo ufw allow 3334/tcp # Default ORLY port
# Enable firewall if not already enabled
sudo ufw enable
Monitor your relay using systemd and standard Linux tools:
# Service status and logs
sudo systemctl status orly
sudo journalctl -u orly -f
# Resource usage
htop
sudo ss -tulpn | grep orly
# Disk usage (database grows over time)
du -sh ~/.local/share/ORLY/
# Check TLS certificates (if using Let's Encrypt)
ls -la ~/.local/share/ORLY/autocert/
ORLY includes comprehensive testing tools for protocol validation and performance testing.
- relay-tester for Nostr protocol compliance validation

For detailed testing instructions, multi-relay testing scenarios, and advanced usage, see the Relay Testing Guide.
The benchmark suite provides comprehensive performance testing and comparison across multiple relay implementations, including throughput, latency, and memory usage metrics.
ORLY includes several command-line utilities in the cmd/ directory for testing, debugging, and administration.
Nostr protocol compliance testing tool. Validates that a relay correctly implements the Nostr protocol specification.
# Run all protocol compliance tests
go run ./cmd/relay-tester -url ws://localhost:3334
# List available tests
go run ./cmd/relay-tester -list
# Run specific test
go run ./cmd/relay-tester -url ws://localhost:3334 -test "Basic Event"
# Output results as JSON
go run ./cmd/relay-tester -url ws://localhost:3334 -json
Comprehensive relay performance benchmarking tool. Tests event storage, queries, and subscription performance with detailed latency metrics (P90, P95, P99).
# Run benchmarks against local database
go run ./cmd/benchmark -data-dir /tmp/bench-db -events 10000 -workers 4
# Run benchmarks against a running relay
go run ./cmd/benchmark -relay ws://localhost:3334 -events 5000
# Use different database backends
go run ./cmd/benchmark -dgraph -events 10000
go run ./cmd/benchmark -neo4j -events 10000
The cmd/benchmark/ directory also includes Docker Compose configurations for comparative benchmarks across multiple relay implementations (strfry, nostr-rs-relay, khatru, etc.).
Load testing tool for evaluating relay performance under sustained high-traffic conditions. Generates events with random content and tags to simulate realistic workloads.
# Run stress test with 10 concurrent workers
go run ./cmd/stresstest -url ws://localhost:3334 -workers 10 -duration 60s
# Generate events with random p-tags (up to 100 per event)
go run ./cmd/stresstest -url ws://localhost:3334 -workers 5
Tests the Blossom blob storage protocol (BUD-01/BUD-02) implementation. Validates upload, download, and authentication flows.
# Test with generated key
go run ./cmd/blossomtest -url http://localhost:3334 -size 1024
# Test with specific nsec
go run ./cmd/blossomtest -url http://localhost:3334 -nsec nsec1...
# Test anonymous uploads (no authentication)
go run ./cmd/blossomtest -url http://localhost:3334 -no-auth
Event aggregation utility that fetches events from multiple relays using bloom filters for deduplication. Useful for syncing events across relays with memory-efficient duplicate detection.
go run ./cmd/aggregator -relays wss://relay1.com,wss://relay2.com -output events.jsonl
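Bloom-filter deduplication, as used by the aggregator, trades a small false-positive rate for constant memory: an event ID already added is always reported as seen (no false negatives), so duplicates are reliably skipped. A minimal illustrative filter (not the aggregator's actual implementation):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a minimal Bloom filter for event-ID deduplication.
type bloom struct {
	bits []uint64
	k    int // number of derived hash functions
}

func newBloom(mBits, k int) *bloom {
	return &bloom{bits: make([]uint64, (mBits+63)/64), k: k}
}

// indexes derives k bit positions from one FNV hash (double hashing).
func (b *bloom) indexes(id string) []uint64 {
	h := fnv.New64a()
	h.Write([]byte(id))
	h1 := h.Sum64()
	h2 := h1>>33 | h1<<31
	m := uint64(len(b.bits) * 64)
	out := make([]uint64, b.k)
	for i := 0; i < b.k; i++ {
		out[i] = (h1 + uint64(i)*h2) % m
	}
	return out
}

func (b *bloom) Add(id string) {
	for _, i := range b.indexes(id) {
		b.bits[i/64] |= 1 << (i % 64)
	}
}

func (b *bloom) Has(id string) bool {
	for _, i := range b.indexes(id) {
		if b.bits[i/64]&(1<<(i%64)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	seen := newBloom(1<<16, 4)
	seen.Add("event-id-aaa")
	fmt.Println(seen.Has("event-id-aaa")) // already fetched: skip it
	fmt.Println(seen.Has("event-id-bbb")) // almost certainly unseen
}
```

The cost of a false positive here is merely skipping an event that was not actually a duplicate, which is why the probabilistic structure is acceptable for this workload.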
Key format conversion utility. Converts between hex and bech32 (npub/nsec) formats for Nostr keys.
# Convert npub to hex
go run ./cmd/convert npub1abc...
# Convert hex to npub
go run ./cmd/convert 0123456789abcdef...
# Convert secret key (nsec or hex) - outputs both nsec and derived npub
go run ./cmd/convert --secret nsec1xyz...
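Under the hood, npub/nsec are bech32 encodings (NIP-19, built on BIP-173) of the 32-byte key: the bytes are regrouped into 5-bit values, a 6-character checksum is appended, and the result is prefixed with a human-readable part. The sketch below illustrates the encoding; production code should use a vetted bech32 library, and the example key is arbitrary:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

const charset = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

// polymod is the BIP-173 checksum polynomial.
func polymod(values []byte) uint32 {
	gen := []uint32{0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3}
	chk := uint32(1)
	for _, v := range values {
		b := chk >> 25
		chk = (chk&0x1ffffff)<<5 ^ uint32(v)
		for i := 0; i < 5; i++ {
			if (b>>uint(i))&1 == 1 {
				chk ^= gen[i]
			}
		}
	}
	return chk
}

func hrpExpand(hrp string) []byte {
	out := make([]byte, 0, len(hrp)*2+1)
	for _, c := range hrp {
		out = append(out, byte(c)>>5)
	}
	out = append(out, 0)
	for _, c := range hrp {
		out = append(out, byte(c)&31)
	}
	return out
}

// toFiveBit regroups 8-bit bytes into 5-bit groups, padding the tail.
func toFiveBit(data []byte) []byte {
	var out []byte
	acc, bits := 0, 0
	for _, b := range data {
		acc = acc<<8 | int(b)
		bits += 8
		for bits >= 5 {
			bits -= 5
			out = append(out, byte(acc>>bits)&31)
		}
	}
	if bits > 0 {
		out = append(out, byte(acc<<(5-bits))&31)
	}
	return out
}

func encode(hrp string, data []byte) string {
	d5 := toFiveBit(data)
	values := append(hrpExpand(hrp), d5...)
	values = append(values, 0, 0, 0, 0, 0, 0)
	chk := polymod(values) ^ 1
	var sb strings.Builder
	sb.WriteString(hrp + "1")
	for _, v := range d5 {
		sb.WriteByte(charset[v])
	}
	for i := 0; i < 6; i++ {
		sb.WriteByte(charset[(chk>>uint(5*(5-i)))&31])
	}
	return sb.String()
}

// verify checks the bech32 checksum of an encoded string.
func verify(s string) bool {
	pos := strings.LastIndex(s, "1")
	if pos < 1 || len(s)-pos < 7 {
		return false
	}
	vals := make([]byte, 0, len(s)-pos-1)
	for _, c := range s[pos+1:] {
		i := strings.IndexRune(charset, c)
		if i < 0 {
			return false
		}
		vals = append(vals, byte(i))
	}
	return polymod(append(hrpExpand(s[:pos]), vals...)) == 1
}

func main() {
	pub, _ := hex.DecodeString(strings.Repeat("ab", 32)) // arbitrary example key
	npub := encode("npub", pub)
	fmt.Println(npub)
	fmt.Println("checksum ok:", verify(npub))
}
```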
Free Internet Name Daemon - CLI tool for the distributed naming system. Manages name registration, transfers, and certificate issuance.
# Validate a name format
go run ./cmd/FIND verify-name example.nostr
# Generate a new key pair
go run ./cmd/FIND generate-key
# Create a registration proposal
go run ./cmd/FIND register myname.nostr
# Transfer a name to a new owner
go run ./cmd/FIND transfer myname.nostr npub1newowner...
Tests the policy system for event write control. Validates that policy rules correctly allow or reject events based on kind, pubkey, and other criteria.
go run ./cmd/policytest -url ws://localhost:3334 -type event -kind 4678
go run ./cmd/policytest -url ws://localhost:3334 -type req -kind 1
go run ./cmd/policytest -url ws://localhost:3334 -type publish-and-query -count 5
Tests policy-based filtering with authorized and unauthorized pubkeys. Validates access control rules for specific users.
go run ./cmd/policyfiltertest -url ws://localhost:3334 \
-allowed-pubkey <hex> -allowed-sec <hex> \
-unauthorized-pubkey <hex> -unauthorized-sec <hex>
Tests WebSocket subscription stability over extended periods. Monitors for dropped subscriptions and connection issues.
# Run subscription stability test for 60 seconds
go run ./cmd/subscription-test -url ws://localhost:3334 -duration 60 -kind 1
# With verbose output
go run ./cmd/subscription-test -url ws://localhost:3334 -duration 120 -v
Simplified subscription stability test that verifies subscriptions remain active without dropping over the test duration.
go run ./cmd/subscription-test-simple -url ws://localhost:3334 -duration 120
ORLY provides four ACL (Access Control List) modes to control who can publish events to your relay:
| Mode | Description | Best For |
|---|---|---|
none | Open relay, anyone can write | Public relays |
follows | Write access based on admin follow lists | Personal/community relays |
managed | Explicit allow/deny lists via NIP-86 API | Private relays |
curating | Three-tier classification with rate limiting | Curated community relays |
export ORLY_ACL_MODE=follows # or: none, managed, curating
The follows ACL system provides flexible relay access control based on social relationships in the Nostr network.
export ORLY_ACL_MODE=follows
export ORLY_ADMINS=npub1fjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsmmleku
./orly
The system grants write access to users followed by designated admins, with read-only access for others. Follow lists update dynamically as admins modify their relationships.
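The access rule reduces to a set-membership check over the union of the admins' follow lists, refreshed whenever an admin publishes a new contact list (kind 3). An illustrative sketch of that logic (names are hypothetical, not ORLY's internal API):

```go
package main

import "fmt"

// followsACL grants write access to admins and to anyone an admin
// follows; everyone else is read-only.
type followsACL struct {
	admins  map[string]bool
	follows map[string]bool // union of all admins' follow lists
}

func (a *followsACL) CanWrite(pubkey string) bool {
	return a.admins[pubkey] || a.follows[pubkey]
}

// UpdateFollows replaces the follow set when an admin publishes a
// new contact list (kind 3) event.
func (a *followsACL) UpdateFollows(followed []string) {
	a.follows = make(map[string]bool, len(followed))
	for _, pk := range followed {
		a.follows[pk] = true
	}
}

func main() {
	acl := &followsACL{admins: map[string]bool{"admin-pk": true}}
	acl.UpdateFollows([]string{"friend-pk"})

	fmt.Println(acl.CanWrite("admin-pk"))    // true: admin
	fmt.Println(acl.CanWrite("friend-pk"))   // true: followed by admin
	fmt.Println(acl.CanWrite("stranger-pk")) // false: read-only
}
```

Because the set is rebuilt from the latest kind 3 event, unfollowing someone revokes their write access on the next update.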
The curation ACL mode provides sophisticated content curation with a three-tier publisher classification system:
Key features:
export ORLY_ACL_MODE=curating
export ORLY_OWNERS=npub1your_owner_key
./orly
After starting, publish a configuration event (kind 30078) to enable the relay. The web UI at /#curation provides a complete management interface.
For detailed configuration and API documentation, see the Curation Mode Guide.
ORLY supports distributed relay clusters using active replication. When configured with peer relays, ORLY will automatically synchronize events between cluster members using efficient HTTP polling.
export ORLY_RELAY_PEERS=https://peer1.example.com,https://peer2.example.com
export ORLY_CLUSTER_ADMINS=npub1cluster_admin_key
Privacy Considerations: By default, ORLY propagates all events including privileged events (DMs, gift wraps, etc.) to cluster peers for complete synchronization. This ensures no data loss but may expose private communications to other relay operators in your cluster.
To enhance privacy, you can disable propagation of privileged events:
export ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS=false
Important: When disabled, privileged events will not be replicated to peer relays. This provides better privacy but means these events will only be available on the originating relay. Users should be aware that accessing their privileged events may require connecting directly to the relay where they were originally published.
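The propagation decision is a simple kind check gated by the environment switch. The sketch below uses kind 4 (legacy encrypted DMs) and kind 1059 (gift wraps) as examples of privileged kinds; ORLY's actual privileged-kind set may differ:

```go
package main

import "fmt"

// privilegedKinds lists event kinds treated as private here for
// illustration only.
var privilegedKinds = map[int]bool{4: true, 1059: true}

// shouldPropagate mirrors the ORLY_CLUSTER_PROPAGATE_PRIVILEGED_EVENTS
// switch: when propagation of privileged events is disabled, those
// kinds stay on the originating relay.
func shouldPropagate(kind int, propagatePrivileged bool) bool {
	if privilegedKinds[kind] {
		return propagatePrivileged
	}
	return true
}

func main() {
	for _, kind := range []int{1, 4, 1059} {
		fmt.Printf("kind %d: default=%v, private-mode=%v\n",
			kind, shouldPropagate(kind, true), shouldPropagate(kind, false))
	}
}
```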
ORLY includes a bidirectional Nostr DM to SMTP email bridge. Users DM the bridge's Nostr pubkey to subscribe (via Lightning payment), send outbound email, and receive inbound email as encrypted DMs.
- Outbound: send a DM with To: / Subject: headers to the bridge pubkey; it sends the email via SMTP
- Inbound: email to npub@yourdomain is delivered as an encrypted DM with a reply link

# 1. Build the unified binary
go build -o orly ./cmd/orly
# 2. Set environment
export ORLY_BRIDGE_ENABLED=true
export ORLY_BRIDGE_DOMAIN=yourdomain.com
export ORLY_BRIDGE_SMTP_PORT=2525
export ORLY_BRIDGE_NWC_URI="nostr+walletconnect://..."
# 3. Start (bridge runs alongside the relay)
./orly
Or run standalone against any Nostr relay:
export ORLY_BRIDGE_RELAY_URL=wss://relay.example.com
./orly bridge
The bridge publishes a kind 0 (profile metadata) event on startup so Nostr clients can discover it. Create a profile.txt in the bridge data directory (or set ORLY_BRIDGE_PROFILE):
name: Marmot Bridge
about: Nostr-Email bridge at yourdomain.com. DM 'subscribe' to get started.
picture: https://yourdomain.com/avatar.png
nip05: bridge@yourdomain.com
lud16: tips@yourdomain.com
website: https://yourdomain.com
See `profile.example.txt` for a template.
NIP-17 messaging clients like White Noise need the bridge to have a kind 10002 relay list event (NIP-65) to know where to send DMs. Without it, the client sees the bridge profile but reports the bridge "isn't on White Noise yet."
Until the bridge publishes kind 10002 automatically, publish one manually using nak:
export NOSTR_SECRET_KEY=nsec1... # Bridge identity
nak event --kind 10002 \
--tag r='wss://your-relay.com/' \
--tag r='wss://relay.damus.io/;read' \
--tag r='wss://nos.lol/;read' \
wss://your-relay.com wss://relay.damus.io wss://nos.lol
See Bridge Deployment Guide β Client Setup: White Noise for full instructions.
ORLY ships a single Docker image that serves both the relay and the email bridge. The default entrypoint runs the relay; pass bridge as the command to run the email bridge.
docker build -t orly .
docker run --rm -p 3334:3334 -v orly-data:/data orly
docker run --rm -p 2525:2525 --env-file .env.bridge orly bridge
A ready-made compose file is provided for bridge deployment:
cp .env.bridge.example .env.bridge # edit with your values
docker compose -f docker-compose.bridge.yml up --build
See `.env.bridge.example` for all available configuration variables.
ORLY supports NIP-77 negentropy-based set reconciliation for efficient relay synchronization.
Enable negentropy client support:
export ORLY_NEGENTROPY_ENABLED=true
./orly
Enable peer relay synchronization:
export ORLY_NEGENTROPY_ENABLED=true
export ORLY_SYNC_NEGENTROPY_PEERS=wss://relay.orly.dev,wss://other-relay.com
./orly
For production deployments, run negentropy as a separate service:
# Build binaries
CGO_ENABLED=0 go build -o orly-sync-negentropy ./cmd/orly-sync-negentropy
# Configure launcher
export ORLY_LAUNCHER_SYNC_NEGENTROPY_ENABLED=true
export ORLY_LAUNCHER_SYNC_NEGENTROPY_BINARY=/path/to/orly-sync-negentropy
export ORLY_SYNC_NEGENTROPY_PEERS=wss://peer-relay.com
ORLY's negentropy implementation is compatible with strfry:
# Pull events from ORLY using strfry
strfry sync wss://your-orly-relay.com --filter '{"kinds": [0, 1, 3]}' --dir down
For detailed configuration including Docker deployments, filtering options, and troubleshooting, see the Negentropy Sync Guide.
| Document | Description |
|---|---|
| Bridge Deployment Guide | DNS, DKIM, NWC, SMTP for Marmot email bridge |
| Deployment Testing | Deployment verification procedures |
| Build Platforms | Multi-platform build guide (Linux, macOS, Windows, Android) |
| Purego Build System | CGO-free build system with runtime library loading |
| WASM/Mobile Builds | WebAssembly and mobile build targets |
| Document | Description |
|---|---|
| Policy Usage Guide | Event filtering and validation rules |
| Policy Configuration Reference | Complete policy JSON schema |
| Policy Troubleshooting | Diagnosing policy issues |
| Curation Mode Guide | Three-tier publisher classification ACL |
| HTTP Guard | Bot detection and HTTP rate limiting |
| Branding Guide | Relay name, icon, NIP-11 customization |
| Document | Description |
|---|---|
| IPC Architecture | gRPC split-process design |
| IPC Sync Services | Sync service API reference |
| Negentropy Sync Guide | NIP-77 set reconciliation setup |
| Sync Client Mode | Client-mode relay synchronization |
| Neo4j Backend | Neo4j database driver setup and tuning |
| NIP-77 Analysis | NIP-77 implementation details |
| Document | Description |
|---|---|
| FIND Names Spec | Free Internet Name Daemon protocol |
| FIND Implementation | FIND integration architecture and status |
| NIP-XX Graph Queries | REQ filter extension for graph traversals |
| NIP-XX Cluster Replication | HTTP polling-based cluster replication |
| NIP-XX Responsive Images | Image variant protocol extension |
| NIP Curation | Curation-mode protocol spec |
| NIP NRC | Nostr Relay Connection protocol |
| Document | Description |
|---|---|
| Glossary | ORLY terminology and domain concepts |
| Relay Testing Guide | Protocol compliance testing |
| Web UI Event Templates | Event kind templates for the web UI |
| Applesauce Reference | Applesauce library integration |
| Graph Implementation Phases | Graph query feature tracker |
| Graph Queries Remaining | Outstanding graph query work |
| Neo4j WoT Spec | Web-of-Trust graph schema |
| Neo4j Schema Changes | Guide for modifying the Neo4j schema |
The nostr library (git.mleku.dev/mleku/nostr/encoders/tag) uses binary optimization for e and p tags to reduce memory usage and improve comparison performance.
When events are unmarshaled from JSON, 64-character hex values in e/p tags are converted to 33-byte binary format (32 bytes hash + null terminator).
Important: When working with e/p tag values in code:
- Do NOT use tag.Value() directly - it returns raw bytes which may be binary, not hex
- Use tag.ValueHex() to get a hex string regardless of storage format
- Use tag.ValueBinary() to get raw 32-byte binary (returns nil if not binary-encoded)

// CORRECT: Use ValueHex() for hex decoding
pt, err := hex.Dec(string(pTag.ValueHex()))
// WRONG: Value() may return binary bytes, not hex
pt, err := hex.Dec(string(pTag.Value())) // Will fail for binary-encoded tags!
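The 33-byte scheme described above can be sketched in standard-library Go. The function names here are illustrative, not the nostr library's actual API; only the storage layout (32 raw bytes plus a null terminator) follows the source:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// compactTagValue converts a 64-char hex tag value to the 33-byte
// binary form (32 bytes hash + null terminator); other values are
// kept as-is.
func compactTagValue(v []byte) []byte {
	if len(v) != 64 {
		return v
	}
	raw := make([]byte, 33)
	if _, err := hex.Decode(raw[:32], v); err != nil {
		return v // not valid hex: keep the original bytes
	}
	return raw // raw[32] == 0 marks the binary form
}

// valueHex returns the hex form regardless of storage format,
// mirroring the ValueHex() behavior described above.
func valueHex(v []byte) []byte {
	if len(v) == 33 && v[32] == 0 {
		out := make([]byte, 64)
		hex.Encode(out, v[:32])
		return out
	}
	return v
}

func main() {
	id := []byte("5c83da77af1dec6d7289834998ad7aafbd9e2191396d75ec3cc27f5a77226f36")
	bin := compactTagValue(id)
	fmt.Println(len(bin))                            // 33 bytes instead of 64
	fmt.Println(string(valueHex(bin)) == string(id)) // round-trips losslessly
}
```

Besides nearly halving memory per tag value, the binary form makes equality comparisons a fixed 32-byte memcmp instead of a 64-byte string compare.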
The /release command pushes to the origin remote with tags:
git push origin main --tags