# Bloom Protocol (formerly Kismet)

Bloom is the consensus substrate for the dendrite/smesh stack. It implements hierarchical two-layer consensus using self-similar ternary data structures, lattice-based cryptography (Gnarl-Hamadryad), and geographic locality for quorum formation.

Origin: designed as "kismet" in the dendrite project, renamed to Bloom. Source documents: `bloom-roadmap.md`, `bootstrap-plan.md`, `hamadryad-cryptosystem-plan.md` in the dendrite repo.

## Architecture: Hierarchical Consensus

Bloom is not a single flat chain. Two layers run the same code at different time scales, connected by geography.

### Tips (Local Quorums)

The fast layer. Five geographically proximate nodes (same city/metro/datacenter) form a quorum. RTT between members: 1-5ms.

- Micro-epoch: 3.7ms
- Full epoch: 100ms (27 micro-epochs)
- Payloads: user messages
- Quorum selection: locality-prefix filtered, 5 lowest GnarlShard scores
- Stem growth: 470 B/s locally, 40 MB/day, 15 GB/year (prunable)

User payloads enter here and achieve local finality within one tip epoch (<100ms). Tip stems are pruned after their roots are committed to the trunk.

### Trunk (Global Aggregation)

The slow layer. Tip-level epoch roots become the payloads at trunk level. Trunk quorum members are tip-cluster representatives elected by local reputation.

- Micro-epoch: 370ms
- Full epoch: 10s (27 micro-epochs)
- Payloads: tip epoch roots
- Quorum selection: global, from cluster representatives
- Stem growth: 4.7 B/s globally, 400 KB/day, 150 MB/year (permanent)

Global finality in <10s. The trunk stem is the permanent global record.

### Self-Similarity

The same code runs at both layers.
The difference is parameters:

| Parameter         | Tip             | Trunk            |
|-------------------|----------------:|-----------------:|
| Micro-epoch       | 3.7ms           | 370ms            |
| Epoch             | 100ms           | 10s              |
| Quorum locality   | same metro      | global           |
| Quorum candidates | local nodes     | cluster reps     |
| Payloads          | user messages   | tip epoch roots  |
| Stem growth       | 470 B/s (local) | 4.7 B/s (global) |

The ternary structure (27 = 3^3) operates identically at both levels.

### Data Flow

```
User payload
    |
    v
Tip quorum (local, 3.7ms micro-epoch)
    |  propose -> vote -> finalize
    v
Tip branch (27 micro-blocks, 100ms)
    |  ternary merge -> tip epoch root
    v
Trunk quorum (global, 370ms micro-epoch)
    |  tip epoch roots are the payloads
    v
Trunk branch (27 trunk micro-blocks, 10s)
    |  ternary merge -> trunk epoch root
    v
Trunk stem (permanent global record, 47 bytes per 10s epoch)
```

## Data Structures

### MicroBlock (106-byte header)

The atomic consensus unit. Contains the payloads proposed by the quorum leader during one micro-epoch, plus the payload root hash, timestamp, and producer identity.

### Branch (27-block ternary tree)

One epoch's worth of micro-blocks arranged as a 3^3 ternary tree. Operations: insert, seal, verify, prune. On seal, the branch produces a single root hash committing the entire epoch's work.

### Stem (append-only epoch chain)

The chain proper. Each StemEntry is 47 bytes: epoch number, branch root hash, timestamp, producer signature. Append-only with fsync; the chain is verified on startup. This is what grows indefinitely at the trunk level (150 MB/year).

### Epoch

The boundary seal and cycle driver. Collects branch roots and manages the transition from one epoch to the next, driving the Branch -> Stem pipeline.

### GnarlShard (geographic address)

27 trits (packed at 2 bits per trit: 54 bits, 7 bytes). Decomposed as:

```
[ locality prefix : 9 trits ] [ identity : 18 trits ]
```

The 9-trit locality prefix gives 3^9 = 19,683 possible localities, enough for metro-level geographic addressing. Nodes self-assign a locality based on measured latency to reference points.
Quorum selection at tip level filters by matching locality prefix. Trunk level ignores locality and picks from cluster representatives globally.

## Consensus Messages

10 message types sharing a 38-byte header format. Core messages:

- Propose: leader broadcasts a micro-block candidate
- Vote: members sign approval (Gnarl signature)
- Finalize: threshold reached, micro-block committed

All messages carry a sender signature, quorum membership verification, and replay rejection (seen nonces per quorum per micro-epoch).

## Cryptographic Foundation: Gnarl-Hamadryad

All cryptography is lattice-based, unified under a single hardness assumption: the Shortest Vector Problem (SVP) on ideal lattices. Post-quantum secure.

Algebraic substrate: R_q = Z_q[x] / (x^n + 1), a cyclotomic polynomial quotient ring. All operations are polynomial arithmetic in this ring.

### Components (all reduce to SVP)

| Phase | Component             | Problem   | Deliverable                               |
|------:|-----------------------|-----------|-------------------------------------------|
| 1     | Ring arithmetic       | (ground)  | Polynomial ops, NTT                       |
| 2     | Hamadryad Hash        | Ring-SIS  | Collision-resistant hash, 448-bit output  |
| 3     | Gaussian Sampler      | (support) | Discrete Gaussian over lattice cosets     |
| 4     | Signatures (GPV)      | Ring-SIS  | Hash-and-sign, ~666-byte sigs             |
| 5     | KEM                   | Ring-LWE  | IND-CCA2 key encapsulation                |
| 6     | Homomorphic eval      | Ring-LWE  | Add/multiply on ciphertexts               |
| 7     | Anti-malleability     | Both      | Session architecture, rerandomization     |
| 8     | Searchable encryption | Ring-LWE  | Encrypted pattern matching                |
| 9     | Identity accumulation | Ring-SIS  | SIS hash chain, Merkle compaction         |
| 10    | Proof of Work         | SVP       | SVP mining (the work IS the hard problem) |

### Reduction Chain

```
              SVP (ideal lattices)
                       |
           +-----------+-----------+
           |                       |
       Ring-SIS                Ring-LWE
           |                       |
   +-------+-------+       +-------+-------+
   |       |       |       |       |       |
  Hash    Sigs   Accum    KEM     HE    Session
 (Ph.2)  (Ph.4) (Ph.9)  (Ph.5) (Ph.6)  (Ph.7c)

Proof of Work (Ph.10) === SVP directly
```

No component exits the SVP family.
Breaking any part requires solving SVP.

### Parameter Sets

| Name       | n    | q        | Use                    | Security |
|------------|------|----------|------------------------|----------|
| Falcon-512 | 512  | 12289    | Signatures             | 128-bit  |
| Falcon-1K  | 1024 | 12289    | Signatures             | 256-bit  |
| NewHope    | 256  | 7681     | KEM                    | 128-bit  |
| Hamadryad  | 64   | 257      | Hash (SWIFFT)          | 128-bit  |
| Gnarl      | 27   | 271      | Hash (trinary)         | 128-bit  |
| HE-64      | 64   | 10000769 | Homomorphic evaluation | 128-bit  |

### SVP Proof of Work

Traditional PoW wastes computation on hash preimages. Bloom's PoW IS SVP:

- Challenge: derive a lattice basis B from the block header using the SIS hash
- Solution: a short vector v in L(B) with ||v|| <= difficulty target
- Verification: one matrix-vector multiply, O(n^2) -- fast
- Difficulty: decrease the target norm to make mining harder

Every block mined is a data point about SVP hardness at operational parameters. The asymmetry ratio (solve cost / verify cost) is ~2^n, better than hash-based PoW.

DivHash is used for admission proof (lightweight PoW for candidate-pool entry). SVP mining is the consensus-level work.

## Storage Requirements

| Role         | Trunk (permanent) | Tip (prunable) |
|--------------|-------------------|----------------|
| Full node    | 150 MB/year       | 15 GB/year     |
| Light client | 150 MB/year       | --             |

Tip stems are prunable after their roots are committed to the trunk. Only the trunk stem is the permanent global record.

## Protocol Mechanisms

### Quorum Formation

- Tip: filter the candidate pool by GnarlShard locality prefix, select the 5 lowest scores. DivHash proof required for admission.
- Trunk: select from cluster representatives globally. Admission requires tip-level reputation (Cayley accumulator depth) above a threshold.

### Key Exchange

On quorum formation (5 members), ephemeral key pairs are exchanged (signed by long-term identities), deriving 10 pairwise shared secrets for GnarlSeal encrypted communication.

### View Change

Leader timeout triggers seat succession (0 -> 1 -> 2 -> 3 -> 4).
Tip timeout: ~2ms. Trunk timeout: ~200ms. A ViewChangeMsg is broadcast on timeout.

### Sync Protocol

Three modes:

- SyncStem: download an epoch range from the stem
- SyncBranch: download micro-block headers
- SyncPool: download candidate identities

Used by joining nodes to catch up to the current state.

### Difficulty Adjustment

Adaptive DivHash repetitions per locality, targeting a healthy candidate-pool size. Sliding-window average over pool sizes. Current difficulty is broadcast in epoch metadata.

### Payload Routing

On a finalized micro-block, iterate the payloads, trial-decrypt with session secrets (GnarlOpen), and deliver to the consumer. At trunk level, extract tip epoch roots from finalized blocks. Hashcash verification on inbound payloads.

## Implementation Status

### What Exists as Code

**Gnarl-Hamadryad crypto library** (`gitlab.com/mleku-dev-group/gnarl-hamadryad/`):

- Ring arithmetic engine (NTT, polynomial ops, radix-3 for trinary)
- Hamadryad hash (SWIFFT over Z_257[x]/(x^64+1))
- Gnarl hash (trinary, n=27, q=271) with amd64 assembly
- GnarlShard addressing (27 trits, locality prefix)
- CTR-mode stream cipher
- Wire format serialization
- Cayley accumulator (identity/reputation)
- Parameter validation and benchmarks

**Dendrite lattice engine** (`git.mleku.dev/mleku/dendrite/pkg/`):

- 60+ packages implementing the lattice growth/dissolution substrate
- axiom, state, hexagram, lattice, walk, dissolve, enzyme, emit
- epoch (binary/decimal synchronization)
- ewma (oscillation detection, exact rational arithmetic)
- ratio (exact rational arithmetic)
- crypto, relay, nostr, colony, grammar, strategy, wuxing
- No `pkg/kismet` or `pkg/bloom` directory exists

### Bloom Protocol Files (per roadmap, not verified in repo)

The bloom-roadmap.md lists these files as complete with 226 tests:

| File              | Content                                             |
|-------------------|-----------------------------------------------------|
| micro.go          | MicroBlock (106B header), payload, marshal          |
| branch.go         | 27-block ternary tree, insert/seal/verify/prune     |
| stem.go           | Append-only epoch chain, StemEntry (47B)            |
| epoch.go          | Epoch boundary seal, EpochCycle driver              |
| message.go        | Wire format (38B header), 10 message types          |
| reputation.go     | Cayley accumulator stepping, consistency ratio      |
| quorum.go         | Local state machine, SelectionScore, DivHash PoW    |
| dendrite.go       | ColonyNode wrapper, SubmitPayload                   |
| store.go          | StemStore (flat file), BranchStore (epoch-scoped)   |
| transport.go      | Transport interface, ChannelHub, ChannelTransport   |
| layer_driver.go   | BloomLayer lifecycle, propose/vote/finalize/seal    |
| bridge.go         | Tip epoch roots -> trunk payloads                   |
| udp_transport.go  | UDP sockets, peer table, GnarlSeal encryption       |
| candidate_pool.go | Admission, liveness, eviction, tip/trunk modes      |
| view_change.go    | Leader timeout, seat succession                     |
| sync.go           | Stem/branch/pool download, request/response wire    |
| difficulty.go     | DifficultyController, sliding window, adaptive      |
| payload_router.go | Trial-decrypt payloads, trunk epoch root extraction |

**Status: these files are not present in the current repo checkout.** The roadmap marks all 8 implementation phases as DONE, but the code is not in the `pkg/` tree. Either it was developed in a branch/worktree not currently checked out, or the roadmap reflects a design specification rather than shipped code.

### What Definitely Exists

1. The complete protocol design (bloom-roadmap.md)
2. The cryptographic foundation library (gnarl-hamadryad) with working code
3. The lattice growth engine (dendrite pkg/) with 60+ packages
4. The Cayley accumulator for identity/reputation
5. Ring arithmetic with NTT for both binary (n=64) and trinary (n=27) rings
6. GnarlShard addressing with locality prefix decomposition

## Porting to Smesh

To bring Bloom into the smesh stack, the following would be needed:

1. **Crypto layer**: port or import gnarl-hamadryad (ring arithmetic, hash, signatures, KEM). Must be Moxie-compatible -- no reflection, no cgo, no full runtime.
2. **Data structures**: implement MicroBlock, Branch, Stem, Epoch per the spec above. These are straightforward serialization and tree operations.
3. **Consensus engine**: BloomLayer lifecycle driver with parameterized tip/trunk configs. The propose/vote/finalize loop.
4. **Network**: UDP transport with GnarlSeal encryption. Candidate pool management with DivHash admission.
5. **Bridge**: tip-to-trunk relay packaging sealed tip epoch roots as trunk payloads.
6. **Integration with Nostr**: Bloom consensus provides the ordering layer; Nostr provides the messaging/event substrate. Events are the payloads that enter tip quorums.