This document highlights frequent errors made when implementing Nostr clients and relays, along with solutions.
Problem: Wrong serialization order or missing fields when calculating SHA256.
Correct Serialization:
[
  0,              // Must be the integer 0
  <pubkey>,       // Lowercase hex string
  <created_at>,   // Unix timestamp, integer
  <kind>,         // Integer
  <tags>,         // Array of arrays of strings
  <content>       // String
]
Common errors:
- Including the id or sig fields in the serialized array

Fix: Serialize exactly as shown, as compact JSON (no extra whitespace), then SHA256 the UTF-8 bytes.
Problem: Using ECDSA instead of Schnorr signatures.
Correct: Use BIP-340 Schnorr signatures over secp256k1. Nostr uses the same curve as Bitcoin, but not ECDSA.
Libraries:
- JavaScript: @noble/curves (schnorr), or nostr-tools, which wraps it
- C: libsecp256k1 with the schnorrsig module enabled
Problem: Events with far-future timestamps or very old timestamps.
Best practices:
- Use Math.floor(Date.now() / 1000) for created_at (seconds, not milliseconds)
- Many relays reject events with created_at > now + 15 minutes

Fix: Always use the current time when creating events.
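A small guard along these lines can reject implausible timestamps before publishing or accepting an event (the 15-minute skew is an assumption; relay policies vary):

```javascript
// Reject events timestamped too far in the future. The 900 s (15 min)
// tolerance is a common but not universal relay policy.
const hasSaneTimestamp = (event, maxFutureSkewSeconds = 900) => {
  const now = Math.floor(Date.now() / 1000)
  return event.created_at <= now + maxFutureSkewSeconds
}
```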
Problem: Tags that aren't arrays or have wrong structure.
Correct format:
{
  "tags": [
    ["e", "event-id", "relay-url", "marker"],
    ["p", "pubkey", "relay-url"],
    ["t", "hashtag"]
  ]
}
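A structural check like the sketch below (hasValidTags is a hypothetical helper) catches malformed tags before signing:

```javascript
// Tags must be an array of arrays of strings (NIP-01).
const hasValidTags = (tags) =>
  Array.isArray(tags) &&
  tags.every(t => Array.isArray(t) && t.every(s => typeof s === 'string'))
```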
Common errors:
- Tags as an object, e.g. {"e": "..."} ❌ (tags must be arrays)
- An extra level of nesting around a single tag inside tags ❌
Problem: Showing multiple versions of replaceable events.
Event types:
- Regular (e.g. kind 1): every version is kept
- Replaceable (kind 0, kind 3, kinds 10000-19999): only the newest per pubkey+kind matters
- Parameterized replaceable (kinds 30000-39999): only the newest per pubkey+kind+d-tag matters
Fix:
// For replaceable events: keep only the newest per pubkey+kind.
// Note the !latestEvents[key] check: without it, the first event for a
// key is never stored (undefined < x is false).
const key = `${event.pubkey}:${event.kind}`
if (!latestEvents[key] || latestEvents[key].created_at < event.created_at) {
  latestEvents[key] = event
}

// For parameterized replaceable events: the key also includes the d tag
const dTag = event.tags.find(t => t[0] === 'd')?.[1] || ''
const pKey = `${event.pubkey}:${event.kind}:${dTag}`
if (!latestEvents[pKey] || latestEvents[pKey].created_at < event.created_at) {
  latestEvents[pKey] = event
}
Problem: Loading indicators never finish or show the wrong state.
Solution:
const receivedEvents = new Set()
let eoseReceived = false

ws.onmessage = (msg) => {
  const [type, ...rest] = JSON.parse(msg.data)
  if (type === 'EVENT') {
    const [subId, event] = rest
    if (receivedEvents.has(event.id)) return // deduplicate
    receivedEvents.add(event.id)
    displayEvent(event)
  }
  if (type === 'EOSE') {
    eoseReceived = true // stored events are done; live events may still arrive
    hideLoadingSpinner()
  }
}
Problem: Memory leaks and wasted bandwidth from unclosed subscriptions.
Fix: Always send CLOSE when done:
ws.send(JSON.stringify(['CLOSE', subId]))
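One way to make cleanup hard to forget is to have the helper that opens a subscription return the function that closes it (a sketch; openSubscription is a hypothetical helper, and the message shapes follow NIP-01 REQ/CLOSE):

```javascript
// Open a REQ and return a function that sends the matching CLOSE.
const openSubscription = (ws, subId, filters) => {
  ws.send(JSON.stringify(['REQ', subId, ...filters]))
  let closed = false
  return () => {
    if (closed) return // idempotent: safe to call twice
    closed = true
    ws.send(JSON.stringify(['CLOSE', subId]))
  }
}
```

A view can then store the returned function and call it on unmount.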
Best practices:
- Close subscriptions when a view unmounts or a one-off query is answered
- Track active subscription ids so none leak
- Reuse one subscription for a feed instead of opening one per event
Problem: Not knowing if events were accepted or rejected.
Solution:
ws.onmessage = (msg) => {
  const [type, eventId, accepted, message] = JSON.parse(msg.data)
  if (type === 'OK') {
    if (!accepted) {
      console.error(`Event ${eventId} rejected: ${message}`)
      handleRejection(eventId, message)
    }
  }
}
Common rejection reasons:
- pow: - Insufficient proof of work
- blocked: - Pubkey or content blocked
- rate-limited: - Too many requests
- invalid: - Failed validation
Problem: Events lost because the WebSocket is not connected.
Fix:
const sendWhenReady = (ws, message) => {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(message)
  } else {
    ws.addEventListener('open', () => ws.send(message), { once: true })
  }
}
Problem: App breaks when relay goes offline.
Solution: Implement reconnection with exponential backoff:
let reconnectDelay = 1000
const maxDelay = 30000

const connect = () => {
  const ws = new WebSocket(relayUrl)
  ws.onclose = () => {
    setTimeout(() => {
      reconnectDelay = Math.min(reconnectDelay * 2, maxDelay)
      connect()
    }, reconnectDelay)
  }
  ws.onopen = () => {
    reconnectDelay = 1000 // Reset on successful connection
    resubscribe()         // Re-establish subscriptions
  }
}
Problem: Requesting too many events, overwhelming relay and client.
Bad:
{
  "kinds": [1],
  "limit": 10000
}
Good:
{
  "kinds": [1],
  "authors": ["<followed-users>"],
  "limit": 50,
  "since": 1234567890
}
Best practices:
- Use a reasonable limit (50-500)
- Scope by authors when possible
- Use since/until for time ranges
- Always specify kinds
Problem: Relying on prefix matching for ids and authors.
Note: early versions of NIP-01 allowed prefix matching, so some older guides recommend sending shortened hex in filters. The current NIP-01 requires exact 64-character lowercase hex for both ids and authors, and many relays no longer match prefixes:
{
  "ids": ["<full-64-char-event-id>"],
  "authors": ["<full-64-char-pubkey>"]
}
Always send full hex strings in filters.
Problem: Redundant filter conditions.
Bad:
{
  "authors": ["pubkey1", "pubkey1"],
  "kinds": [1, 1]
}
Good:
{
  "authors": ["pubkey1"],
  "kinds": [1]
}
Deduplicate filter arrays before sending.
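Deduplication can be automated with a small normalizer (a sketch; only the authors and kinds fields are handled here):

```javascript
// Remove duplicate entries from the array-valued fields of a filter.
const dedupeFilter = (filter) => ({
  ...filter,
  ...(filter.authors && { authors: [...new Set(filter.authors)] }),
  ...(filter.kinds && { kinds: [...new Set(filter.kinds)] })
})
```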
Problem: Missing root/reply markers or wrong tag order.
Correct reply structure (NIP-10):
{
  "kind": 1,
  "tags": [
    ["e", "<root-event-id>", "<relay>", "root"],
    ["e", "<parent-event-id>", "<relay>", "reply"],
    ["p", "<author1-pubkey>"],
    ["p", "<author2-pubkey>"]
  ]
}
Key points:
- Mark the thread root's e tag with "root"
- Mark the direct parent's e tag with "reply"
- Include p tags for all mentioned users
Problem: Authors are not notified of replies.
Fix: Always add a p tag for:
- The author of the event you are replying to
- Every user already p-tagged in the parent event
{
  "tags": [
    ["e", "event-id", "", "reply"],
    ["p", "original-author"],
    ["p", "mentioned-user1"],
    ["p", "mentioned-user2"]
  ]
}
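Carrying the p tags over can be mechanical, as in this sketch (buildReplyTags is a hypothetical helper; relay hints are left empty):

```javascript
// Build reply tags per NIP-10: e tags with markers, plus p tags for the
// parent author and everyone already p-tagged in the parent.
const buildReplyTags = (parent, rootId = null) => {
  const tags = []
  if (rootId && rootId !== parent.id) {
    tags.push(['e', rootId, '', 'root'])
    tags.push(['e', parent.id, '', 'reply'])
  } else {
    tags.push(['e', parent.id, '', 'root']) // replying directly to the root
  }
  const people = new Set([
    parent.pubkey,
    ...parent.tags.filter(t => t[0] === 'p').map(t => t[1])
  ])
  people.forEach(pk => tags.push(['p', pk]))
  return tags
}
```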
Problem: Ambiguous thread structure.
Solution: Always use markers in e tags:
- root - Root of the thread
- reply - Direct parent
- mention - Referenced but not replied to

Without markers, clients must guess the thread structure.
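Reading the markers back is symmetric (a sketch assuming marked e tags; unmarked legacy tags are ignored here):

```javascript
// Extract thread structure from a reply's e tags (NIP-10 marked style).
const parseThread = (event) => {
  const eTags = event.tags.filter(t => t[0] === 'e')
  const root = eTags.find(t => t[3] === 'root')?.[1] ?? null
  // Top-level replies mark only the root, so fall back to it
  const reply = eTags.find(t => t[3] === 'reply')?.[1] ?? root
  const mentions = eTags.filter(t => t[3] === 'mention').map(t => t[1])
  return { root, reply, mentions }
}
```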
Problem: Single point of failure, censorship vulnerability.
Solution: Connect to multiple relays (5-15 common):
const relays = [
  'wss://relay1.com',
  'wss://relay2.com',
  'wss://relay3.com'
]
const connections = relays.map(url => connect(url))
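With several connections, the same event usually arrives more than once, so deduplicate by event id before display (a minimal sketch; makeDeduper is a hypothetical helper):

```javascript
// Deduplicate events across relay connections by event id.
const makeDeduper = () => {
  const seen = new Set()
  return (event) => {
    if (seen.has(event.id)) return false // already handled
    seen.add(event.id)
    return true
  }
}
```

Each connection's message handler can then call the returned function and only display events it has not seen.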
Best practices:
- Deduplicate events by id across relays
- Treat a publish as successful if any relay accepts it
- Don't block the UI on the slowest relay
Problem: Querying wrong relays, missing user's events.
Correct flow:
1. Fetch the user's kind 10002 event (NIP-65 relay list)
2. Parse its r tags into read and write relays
3. Query the user's write relays to find their events

async function getUserRelays(pubkey) {
  // Fetch the kind 10002 relay list (NIP-65)
  const relayList = await fetchEvent({
    kinds: [10002],
    authors: [pubkey]
  })
  const readRelays = []
  const writeRelays = []
  if (!relayList) return { readRelays, writeRelays } // no relay list published
  relayList.tags.forEach(([tag, url, mode]) => {
    if (tag === 'r') {
      // An r tag without a marker counts as both read and write
      if (!mode || mode === 'read') readRelays.push(url)
      if (!mode || mode === 'write') writeRelays.push(url)
    }
  })
  return { readRelays, writeRelays }
}
Problem: Violating relay policies, getting rate limited or banned.
Solution: Fetch and respect NIP-11 relay info:
const getRelayInfo = async (relayUrl) => {
  const url = relayUrl.replace('wss://', 'https://').replace('ws://', 'http://')
  const response = await fetch(url, {
    headers: { 'Accept': 'application/nostr+json' }
  })
  return response.json()
}

// Respect limitations
const info = await getRelayInfo(relayUrl)
const maxLimit = info.limitation?.max_limit || 500
const maxFilters = info.limitation?.max_filters || 10
Problem: Including nsec in client code, logs, or network requests.
Never:
- Hardcode an nsec in source code or configuration committed to version control
- Log private keys, even in debug output
- Send private keys to any server
Best practices:
- Prefer a NIP-07 browser extension signer so the app never touches the key
- If the app must hold a key, encrypt it at rest
- Treat showing an nsec to the user as a one-time export, not routine UI
Problem: Accepting invalid events, vulnerability to attacks.
Always verify:
const verifyEvent = (event) => {
  // 1. Verify the id matches the serialized event
  const calculatedId = sha256(serializeEvent(event))
  if (calculatedId !== event.id) return false

  // 2. Verify the Schnorr signature over the id
  const signatureValid = schnorr.verify(event.sig, event.id, event.pubkey)
  if (!signatureValid) return false

  // 3. Reject far-future timestamps (15 min tolerance)
  const now = Math.floor(Date.now() / 1000)
  if (event.created_at > now + 900) return false

  return true
}
Verify before:
- Displaying an event to the user
- Caching or storing it
- Relaying or broadcasting it further
Problem: NIP-04 direct-message encryption is weak: it is unauthenticated and leaks message length.
Solution: Use NIP-44 instead:
- Versioned payloads, so the scheme can evolve
- ChaCha20 encryption with an HMAC-SHA256 authentication tag
- Padding that hides the exact message length
Migration: Use NIP-44 for all new encrypted messages; keep NIP-04 decryption only for reading old ones.
Problem: XSS vulnerabilities in displayed content.
Solution: Sanitize before rendering:
import DOMPurify from 'dompurify'

const safeContent = DOMPurify.sanitize(event.content, {
  ALLOWED_TAGS: ['b', 'i', 'u', 'a', 'code', 'pre'],
  ALLOWED_ATTR: ['href', 'target', 'rel']
})
Especially critical for:
- Profile fields (name, about, website)
- Note content rendered as HTML or markdown
- Link previews and embedded media
Problem: Re-fetching same events repeatedly, poor performance.
Solution: Implement event cache:
const eventCache = new Map()

const cacheEvent = (event) => {
  eventCache.set(event.id, event)
}

const getCachedEvent = (eventId) => {
  return eventCache.get(eventId)
}
Cache strategies:
- Bound the cache size (e.g. LRU eviction) so memory stays flat
- Persist to IndexedDB or similar for cross-session reuse
- For replaceable events, key by pubkey+kind (+d tag) instead of id
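A bounded variant can be sketched with a Map, which iterates in insertion order (the maximum size here is an arbitrary assumption):

```javascript
// Event cache with LRU eviction. Map preserves insertion order, so the
// first key is always the least recently used.
class EventCache {
  constructor(maxSize = 5000) {
    this.maxSize = maxSize
    this.map = new Map()
  }
  get(id) {
    if (!this.map.has(id)) return undefined
    const event = this.map.get(id)
    this.map.delete(id)   // refresh recency by re-inserting
    this.map.set(id, event)
    return event
  }
  set(event) {
    this.map.delete(event.id)
    this.map.set(event.id, event)
    if (this.map.size > this.maxSize) {
      this.map.delete(this.map.keys().next().value) // evict oldest
    }
  }
}
```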
Problem: The app feels slow because the UI waits for relay confirmation.
Solution: Show user's events immediately:
const publishEvent = async (event) => {
  // Immediately show to the user
  displayEvent(event, { pending: true })

  // Publish to all relays; allSettled so one failed relay
  // doesn't reject the whole batch
  const results = await Promise.allSettled(
    relays.map(relay => relay.publish(event))
  )

  // Success if any relay accepted the event
  const success = results.some(r => r.status === 'fulfilled' && r.value.accepted)
  displayEvent(event, { pending: false, success })
}
Problem: User doesn't know if app is working.
Solution: Clear loading indicators:
- Show a spinner until EOSE arrives, then hide it
- Distinguish "still loading" from "no results"
- Mark the user's own unconfirmed events as pending until a relay sends OK
Problem: Loading entire thread at once, performance issues.
Solution: Implement pagination:
const loadThread = async (eventId, cursor = null) => {
  const filter = {
    '#e': [eventId],
    kinds: [1],
    limit: 20
  }
  if (cursor) filter.until = cursor // omit until on the first page
  const replies = await fetchEvents(filter)
  return { replies, nextCursor: replies[replies.length - 1]?.created_at }
}
Problem: App works with one relay but fails with others.
Solution: Test with:
- Several relay implementations (e.g. strfry, nostr-rs-relay)
- Public relays with different policies (auth, proof of work, rate limits)
- A local relay for controlled failure testing

Critical tests:
- Reconnection after a relay restart
- Rejected events (OK with accepted = false)
- EOSE timing with empty result sets

Metrics to track:
- Publish success rate per relay
- Query latency and time to EOSE
- Dropped connections per hour
Event Creation:
- Serialize in the exact NIP-01 order, as compact JSON, and SHA256 the UTF-8 bytes
- Sign with Schnorr, not ECDSA
- Use the current Unix timestamp in seconds
- Tags are arrays of arrays of strings

WebSocket:
- Send messages only when the connection is open
- Reconnect with exponential backoff and resubscribe
- Close subscriptions you no longer need
- Handle OK and EOSE messages, including rejections

Filters:
- Always set limit and kinds; scope by authors and since/until where possible
- Send full 64-character hex ids and pubkeys
- Deduplicate filter arrays

Threading:
- Use NIP-10 root/reply/mention markers
- Propagate p tags so authors are notified

Relays:
- Connect to multiple relays and deduplicate events by id
- Use NIP-65 relay lists to find a user's relays
- Respect NIP-11 limitations

Security:
- Never expose private keys; prefer NIP-07 signers
- Verify id and signature before trusting an event
- Use NIP-44 for new encrypted messages
- Sanitize content before rendering

UX:
- Cache events, publish optimistically, and show clear loading states
- Paginate long threads

Testing:
- Test against multiple relay implementations and failure modes