Bloom is the consensus substrate for the dendrite/smesh stack. It implements hierarchical two-layer consensus using self-similar ternary data structures, lattice-based cryptography (Gnarl-Hamadryad), and geographic locality for quorum formation.
Origin: designed as "kismet" in the dendrite project, renamed to Bloom.
Source documents: bloom-roadmap.md, bootstrap-plan.md,
hamadryad-cryptosystem-plan.md in the dendrite repo.
Bloom is not a single flat chain. Two layers run the same code at different time scales, connected by geography.
The fast layer. Five geographically proximate nodes (same city/metro/datacenter) form a quorum. RTT between members: 1-5ms.
User payloads enter here and achieve local finality within one tip epoch (<100ms). Tip stems are pruned after their roots are committed to the trunk.
The slow layer. Tip-level epoch roots become the payloads at trunk level. Trunk quorum members are tip-cluster representatives elected by local reputation.
Global finality in <10s. The trunk stem is the permanent global record.
The same code runs at both layers. The difference is parameters:
| Parameter | Tip | Trunk |
|---|---|---|
| Micro-epoch | 3.7ms | 370ms |
| Epoch | 100ms | 10s |
| Quorum locality | same metro | global |
| Quorum candidates | local nodes | cluster reps |
| Payloads | user messages | tip epoch roots |
| Stem growth | 470 B/s (local) | 4.7 B/s (global) |
The ternary structure (27 = 3^3) operates identically at both levels.
```
User payload
     |
     v
Tip quorum (local, 3.7ms micro-epoch)
     |  propose -> vote -> finalize
     v
Tip branch (27 micro-blocks, 100ms)
     |  ternary merge -> tip epoch root
     v
Trunk quorum (global, 370ms micro-epoch)
     |  tip epoch roots are the payloads
     v
Trunk branch (27 trunk micro-blocks, 10s)
     |  ternary merge -> trunk epoch root
     v
Trunk stem (permanent global record, 47 bytes per 10s epoch)
```
MicroBlock: the atomic consensus unit. It contains the payloads proposed by the quorum leader during one micro-epoch, plus the payload root hash, timestamp, and producer identity.
Branch: one epoch's worth of micro-blocks arranged as a 3^3 ternary tree. Operations: insert, seal, verify, prune. Sealing produces a single root hash committing the entire epoch's work.
Stem: the chain proper. Each StemEntry is 47 bytes: epoch number, branch root hash, timestamp, producer signature. Append-only with fsync, with chain verification on startup. This is what grows indefinitely at the trunk level (150 MB/year).
Epoch: the boundary seal and cycle driver. It collects branch roots, manages the transition from one epoch to the next, and drives the Branch -> Stem pipeline.
Node identifiers are 27 trits (packed 2 bits per trit: 54 bits, 7 bytes), decomposed as:
[ locality prefix : 9 trits ] [ identity : 18 trits ]
9-trit locality prefix: 3^9 = 19,683 possible localities, enough for metro-level geographic addressing. Nodes self-assign locality based on measured latency to reference points.
Quorum selection at tip level filters by matching locality prefix. Trunk level ignores locality and picks from cluster representatives globally.
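The tip-level filter can be sketched as a 9-trit prefix comparison. The 2-bits-per-trit packing matches the stated 54 bits / 7 bytes, but the bit order assumed here (trit i in bits 2i..2i+1) is an illustration, not the spec:

```go
package main

import "fmt"

// NodeID is 27 trits packed two bits per trit (54 bits) into 7 bytes.
// Trit i occupies bits [2i, 2i+1] of the array -- an assumed layout.
type NodeID [7]byte

func (id NodeID) trit(i int) byte {
	return (id[i/4] >> uint((i%4)*2)) & 0b11
}

// sameLocality reports whether two IDs share the 9-trit locality prefix,
// the tip-level quorum filter (3^9 = 19683 possible localities).
func sameLocality(a, b NodeID) bool {
	for i := 0; i < 9; i++ {
		if a.trit(i) != b.trit(i) {
			return false
		}
	}
	return true
}

func main() {
	a := NodeID{0x11, 0x22, 0x02}
	b := NodeID{0x11, 0x22, 0x42} // same 9-trit prefix, different identity trits
	c := NodeID{0x12, 0x22, 0x02} // differs in trit 0 of the prefix
	fmt.Println(sameLocality(a, b), sameLocality(a, c)) // true false
}
```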
The wire protocol defines 10 message types sharing a 38-byte header format.
All messages carry sender signature, quorum membership verification, and replay rejection (seen nonces per quorum per micro-epoch).
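The replay-rejection rule reduces to a set of seen nonces keyed by (quorum, micro-epoch); entries for past micro-epochs can be dropped wholesale, bounding memory. A sketch with hypothetical names:

```go
package main

import "fmt"

type msgKey struct {
	quorum     uint32
	microEpoch uint64
	nonce      uint64
}

// ReplayFilter implements "seen nonces per quorum per micro-epoch":
// the first sighting of a (quorum, micro-epoch, nonce) triple is accepted,
// any repeat is rejected as a replay.
type ReplayFilter struct {
	seen map[msgKey]struct{}
}

func NewReplayFilter() *ReplayFilter {
	return &ReplayFilter{seen: make(map[msgKey]struct{})}
}

// Accept returns false on replay.
func (r *ReplayFilter) Accept(quorum uint32, microEpoch, nonce uint64) bool {
	k := msgKey{quorum, microEpoch, nonce}
	if _, dup := r.seen[k]; dup {
		return false
	}
	r.seen[k] = struct{}{}
	return true
}

func main() {
	f := NewReplayFilter()
	fmt.Println(f.Accept(1, 10, 42)) // true: first sighting
	fmt.Println(f.Accept(1, 10, 42)) // false: replay
	fmt.Println(f.Accept(1, 11, 42)) // true: same nonce, new micro-epoch
}
```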
All cryptography is lattice-based, unified under a single hardness assumption: the Shortest Vector Problem (SVP) on ideal lattices. Post-quantum secure.
Algebraic substrate: Rq = Zq[x] / (x^n + 1), cyclotomic polynomial quotient ring. All operations are polynomial arithmetic in this ring.
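A minimal illustration of arithmetic in Rq: reduction by x^n + 1 means x^n = -1, so coefficients wrapping past degree n-1 come back negated (a "negacyclic" convolution). This is schoolbook O(n^2) multiplication; the Phase 1 deliverable would use the NTT instead:

```go
package main

import "fmt"

// mulRq multiplies two degree-(n-1) polynomials in Rq = Zq[x]/(x^n + 1).
// Because x^n = -1 in this ring, a product term landing at degree i+j >= n
// wraps to degree i+j-n with its sign flipped.
func mulRq(a, b []int64, q int64) []int64 {
	n := len(a)
	c := make([]int64, n)
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			k := i + j
			t := a[i] * b[j] % q
			if k >= n { // x^(i+j) = -x^(i+j-n)
				k -= n
				t = -t
			}
			c[k] = ((c[k]+t)%q + q) % q
		}
	}
	return c
}

func main() {
	// n = 4, q = 17: x^3 * x = x^4 = -1 = 16 mod 17
	a := []int64{0, 0, 0, 1}
	b := []int64{0, 1, 0, 0}
	fmt.Println(mulRq(a, b, 17)) // [16 0 0 0]
}
```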
| Phase | Component | Problem | Deliverable |
|---|---|---|---|
| 1 | Ring arithmetic | (ground) | Polynomial ops, NTT |
| 2 | Hamadryad Hash | Ring-SIS | Collision-resistant hash, 448-bit output |
| 3 | Gaussian Sampler | (support) | Discrete Gaussian over lattice cosets |
| 4 | Signatures (GPV) | Ring-SIS | Hash-and-sign, ~666 byte sigs |
| 5 | KEM | Ring-LWE | IND-CCA2 key encapsulation |
| 6 | Homomorphic eval | Ring-LWE | Add/multiply on ciphertexts |
| 7 | Anti-malleability | Both | Session architecture, rerandomization |
| 8 | Searchable encryption | Ring-LWE | Encrypted pattern matching |
| 9 | Identity accumulation | Ring-SIS | SIS hash chain, Merkle compaction |
| 10 | Proof of Work | SVP | SVP mining (the work IS the hard problem) |
```
              SVP (ideal lattices)
                       |
           +-----------+-----------+
           |                       |
        Ring-SIS               Ring-LWE
           |                       |
   +-------+-------+       +------+-------+
   |       |       |       |      |       |
  Hash   Sigs   Accum     KEM    HE    Session
 (Ph.2) (Ph.4) (Ph.9)   (Ph.5) (Ph.6) (Ph.7c)

 Proof of Work (Ph.10) === SVP directly
```
No component exits the SVP family. Breaking any part requires solving SVP.
| Name | n | q | Use | Security |
|---|---|---|---|---|
| Falcon-512 | 512 | 12289 | Signatures | 128-bit |
| Falcon-1K | 1024 | 12289 | Signatures | 256-bit |
| NewHope | 256 | 7681 | KEM | 128-bit |
| Hamadryad | 64 | 257 | Hash (SWIFFT) | 128-bit |
| Gnarl | 27 | 271 | Hash (trinary) | 128-bit |
| HE-64 | 64 | 10000769 | Homomorphic evaluation | 128-bit |
Traditional PoW wastes computation on hash preimages. Bloom's PoW IS SVP:
Every block mined is a data point about SVP hardness at operational parameters. The asymmetry ratio (solve cost / verify cost) is ~2^n, better than hash-based PoW.
DivHash is used for admission proof (lightweight PoW for candidate pool entry). SVP mining is the consensus-level work.
| Role | Trunk (permanent) | Tip (prunable) |
|---|---|---|
| Full node | 150 MB/year | 15 GB/year |
| Light client | 150 MB/year | -- |
Tip stems are prunable after roots are committed to trunk. Only the trunk stem is the permanent global record.
Candidate pool: nodes with the lowest scores are evicted, and a DivHash proof is required for admission. Trunk candidacy requires tip-level reputation (Cayley accumulator depth) above a threshold.
On quorum formation (5 members), ephemeral key pairs are exchanged (signed by long-term identity), deriving 10 pairwise shared secrets for GnarlSeal encrypted communication.
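The count of ten secrets is simply C(5,2) = 10 unordered pairs among five members. The sketch below enumerates the pairs, with a hash of the two ephemeral public keys standing in for the Ring-LWE key exchange the design actually specifies:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// pairwiseSecrets derives one shared secret per unordered pair of the 5
// quorum members: C(5,2) = 10. Hashing the two ephemeral public keys is a
// stand-in for the KEM-based exchange; the real secrets would come from
// GnarlSeal session establishment.
func pairwiseSecrets(ephemeralPubs [5][]byte) map[[2]int][32]byte {
	secrets := make(map[[2]int][32]byte)
	for i := 0; i < 5; i++ {
		for j := i + 1; j < 5; j++ {
			h := sha256.New()
			h.Write(ephemeralPubs[i])
			h.Write(ephemeralPubs[j])
			var s [32]byte
			copy(s[:], h.Sum(nil))
			secrets[[2]int{i, j}] = s
		}
	}
	return secrets
}

func main() {
	var pubs [5][]byte
	for i := range pubs {
		pubs[i] = []byte{byte(i)}
	}
	fmt.Println(len(pairwiseSecrets(pubs))) // 10
}
```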
Leader timeout triggers seat succession (0->1->2->3->4). Tip timeout: ~2ms. Trunk timeout: ~200ms. ViewChangeMsg broadcast on timeout.
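Seat succession itself is a simple rotation over the five seats; a sketch (the ViewChangeMsg broadcast and timer machinery are elided):

```go
package main

import "fmt"

// nextSeat advances leadership to the next of the 5 quorum seats after a
// leader timeout: 0 -> 1 -> 2 -> 3 -> 4 -> 0. In the protocol this step is
// accompanied by a ViewChangeMsg broadcast.
func nextSeat(seat int) int {
	return (seat + 1) % 5
}

func main() {
	seat := 3
	for i := 0; i < 3; i++ {
		seat = nextSeat(seat)
		fmt.Print(seat, " ")
	}
	fmt.Println() // 4 0 1
}
```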
Sync has three modes (stem, branch, and candidate-pool download), used by joining nodes to catch up to the current state.
Adaptive DivHash repetitions per locality, targeting healthy candidate pool size. Sliding window average over pool sizes. Current difficulty broadcast in epoch metadata.
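A sketch of the adaptive controller: a sliding-window average of observed pool sizes nudges the DivHash repetition count up when the pool is oversized and down when it is undersized. The window size, target, dead band, and step are illustrative constants, not from the spec:

```go
package main

import "fmt"

// DifficultyController adjusts DivHash repetitions from a sliding-window
// average of candidate-pool sizes, targeting a healthy pool.
type DifficultyController struct {
	target float64   // desired pool size (illustrative)
	window []float64 // recent pool-size samples
	maxLen int       // sliding-window length (illustrative)
	Reps   int       // current DivHash repetitions
}

// Observe records a pool-size sample and adjusts difficulty: a persistently
// oversized pool raises the admission cost, an undersized one lowers it.
func (d *DifficultyController) Observe(poolSize int) {
	d.window = append(d.window, float64(poolSize))
	if len(d.window) > d.maxLen {
		d.window = d.window[1:]
	}
	var sum float64
	for _, v := range d.window {
		sum += v
	}
	avg := sum / float64(len(d.window))
	switch {
	case avg > d.target*1.2: // pool too large: raise admission cost
		d.Reps++
	case avg < d.target*0.8 && d.Reps > 1: // pool too small: lower it
		d.Reps--
	}
}

func main() {
	d := &DifficultyController{target: 50, maxLen: 8, Reps: 4}
	for _, n := range []int{90, 95, 100} { // pool persistently oversized
		d.Observe(n)
	}
	fmt.Println(d.Reps) // 7
}
```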
On finalized micro-block, iterate payloads, trial-decrypt with session secrets (GnarlOpen), deliver to consumer. At trunk level, extract tip epoch roots from finalized blocks. Hashcash verification on inbound payloads.
The Gnarl-Hamadryad crypto library lives at gitlab.com/mleku-dev-group/gnarl-hamadryad/, and the dendrite lattice engine at git.mleku.dev/mleku/dendrite/pkg/.
No pkg/kismet or pkg/bloom directory exists in the checkout. The bloom-roadmap.md lists these files as complete, with 226 tests:
| File | Content |
|---|---|
| micro.go | MicroBlock (106B header), payload, marshal |
| branch.go | 27-block ternary tree, insert/seal/verify/prune |
| stem.go | Append-only epoch chain, StemEntry (47B) |
| epoch.go | Epoch boundary seal, EpochCycle driver |
| message.go | Wire format (38B header), 10 message types |
| reputation.go | Cayley accumulator stepping, consistency ratio |
| quorum.go | Local state machine, SelectionScore, DivHash PoW |
| dendrite.go | ColonyNode wrapper, SubmitPayload |
| store.go | StemStore (flat file), BranchStore (epoch-scoped) |
| transport.go | Transport interface, ChannelHub, ChannelTransport |
| layer_driver.go | BloomLayer lifecycle, propose/vote/finalize/seal |
| bridge.go | Tip epoch roots -> trunk payloads |
| udp_transport.go | UDP sockets, peer table, GnarlSeal encryption |
| candidate_pool.go | Admission, liveness, eviction, tip/trunk modes |
| view_change.go | Leader timeout, seat succession |
| sync.go | Stem/branch/pool download, request/response wire |
| difficulty.go | DifficultyController, sliding window, adaptive |
| payload_router.go | Trial-decrypt payloads, trunk epoch root extraction |
Status: these files are not present in the current repo checkout. The
roadmap marks all 8 implementation phases as DONE but the code is not in
the pkg/ tree. Either it was developed in a branch/worktree not currently
checked out, or the roadmap reflects a design specification rather than
shipped code.
To bring Bloom into the smesh stack, the following would be needed:
- Crypto primitives (hash, signatures, KEM). Must be Moxie-compatible -- no reflection, no cgo, no full runtime.
- Core data structures per the spec above. These are straightforward serialization and tree operations.
- Layer drivers with tip/trunk configs. The propose/vote/finalize loop.
- Candidate pool management with DivHash admission.
- Routing and delivery of payloads.
Nostr provides the messaging/event substrate. Events are the payloads that enter tip quorums.