
Bloom Protocol (formerly Kismet)

Bloom is the consensus substrate for the dendrite/smesh stack. It implements hierarchical two-layer consensus using self-similar ternary data structures, lattice-based cryptography (Gnarl-Hamadryad), and geographic locality for quorum formation.

Origin: designed as "kismet" in the dendrite project, renamed to Bloom. Source documents: bloom-roadmap.md, bootstrap-plan.md, hamadryad-cryptosystem-plan.md in the dendrite repo.

Architecture: Hierarchical Consensus

Bloom is not a single flat chain. Two layers run the same code at different time scales, connected by geography.

Tips (Local Quorums)

The fast layer. Five geographically proximate nodes (same city/metro/datacenter) form a quorum. RTT between members: 1-5ms.

User payloads enter here and achieve local finality within one tip epoch (<100ms). Tip stems are pruned after their roots are committed to the trunk.

Trunk (Global Aggregation)

The slow layer. Tip-level epoch roots become the payloads at trunk level. Trunk quorum members are tip-cluster representatives elected by local reputation.

Global finality in <10s. The trunk stem is the permanent global record.

Self-Similarity

The same code runs at both layers. The difference is parameters:

| Parameter | Tip | Trunk |
|---|---|---|
| Micro-epoch | 3.7ms | 370ms |
| Epoch | 100ms | 10s |
| Quorum locality | same metro | global |
| Quorum candidates | local nodes | cluster reps |
| Payloads | user messages | tip epoch roots |
| Stem growth | 470 B/s (local) | 4.7 B/s (global) |

The ternary structure (27 = 3^3) operates identically at both levels.

Data Flow

User payload
    |
    v
Tip quorum (local, 3.7ms micro-epoch)
    |  propose -> vote -> finalize
    v
Tip branch (27 micro-blocks, 100ms)
    |  ternary merge -> tip epoch root
    v
Trunk quorum (global, 370ms micro-epoch)
    |  tip epoch roots are the payloads
    v
Trunk branch (27 trunk micro-blocks, 10s)
    |  ternary merge -> trunk epoch root
    v
Trunk stem (permanent global record, 47 bytes per 10s epoch)

Data Structures

MicroBlock (106-byte header)

The atomic consensus unit. Contains payloads proposed by the quorum leader during one micro-epoch. Includes payload root hash, timestamp, producer identity.

Branch (27-block ternary tree)

One epoch's worth of micro-blocks arranged as a 3^3 ternary tree. Operations: insert, seal, verify, prune. On seal: produces a single root hash committing the entire epoch's work.

Stem (append-only epoch chain)

The chain proper. Each StemEntry is 47 bytes: epoch number, branch root hash, timestamp, producer signature. Append-only with fsync. Chain verification on startup. This is what grows indefinitely at the trunk level (150 MB/year).

Epoch

Boundary seal and cycle driver. Collects branch roots, manages the transition from one epoch to the next. Drives the Branch -> Stem pipeline.

GnarlShard (geographic address)

27 trits (54 bits, 7 bytes). Decomposed as:

[ locality prefix : 9 trits ] [ identity : 18 trits ]

9-trit locality prefix: 3^9 = 19,683 possible localities, enough for metro-level geographic addressing. Nodes self-assign locality based on measured latency to reference points.

Quorum selection at tip level filters by matching locality prefix. Trunk level ignores locality and picks from cluster representatives globally.

Consensus Messages

10 message types share a 38-byte header format. All messages carry a sender signature, quorum membership verification, and replay rejection (seen nonces per quorum per micro-epoch).

Cryptographic Foundation: Gnarl-Hamadryad

All cryptography is lattice-based, unified under a single hardness assumption: the Shortest Vector Problem (SVP) on ideal lattices. Post-quantum secure.

Algebraic substrate: Rq = Zq[x] / (x^n + 1), cyclotomic polynomial quotient ring. All operations are polynomial arithmetic in this ring.

Components (all reduce to SVP)

| Phase | Component | Problem | Deliverable |
|---|---|---|---|
| 1 | Ring arithmetic | (ground) | Polynomial ops, NTT |
| 2 | Hamadryad Hash | Ring-SIS | Collision-resistant hash, 448-bit output |
| 3 | Gaussian Sampler | (support) | Discrete Gaussian over lattice cosets |
| 4 | Signatures (GPV) | Ring-SIS | Hash-and-sign, ~666-byte sigs |
| 5 | KEM | Ring-LWE | IND-CCA2 key encapsulation |
| 6 | Homomorphic eval | Ring-LWE | Add/multiply on ciphertexts |
| 7 | Anti-malleability | Both | Session architecture, rerandomization |
| 8 | Searchable encryption | Ring-LWE | Encrypted pattern matching |
| 9 | Identity accumulation | Ring-SIS | SIS hash chain, Merkle compaction |
| 10 | Proof of Work | SVP | SVP mining (the work IS the hard problem) |

Reduction Chain

                     SVP (ideal lattices)
                             |
               +-------------+-------------+
               |                           |
           Ring-SIS                    Ring-LWE
               |                           |
       +-------+-------+          +-------+-------+
       |       |       |          |       |       |
     Hash    Sigs    Accum       KEM      HE   Session
    (Ph.2)  (Ph.4)  (Ph.9)     (Ph.5)  (Ph.6) (Ph.7c)

              Proof of Work (Ph.10) === SVP directly

No component exits the SVP family. Breaking any part requires solving SVP.

Parameter Sets

| Name | n | q | Use | Security |
|---|---|---|---|---|
| Falcon-512 | 512 | 12289 | Signatures | 128-bit |
| Falcon-1K | 1024 | 12289 | Signatures | 256-bit |
| NewHope | 256 | 7681 | KEM | 128-bit |
| Hamadryad | 64 | 257 | Hash (SWIFFT) | 128-bit |
| Gnarl | 27 | 271 | Hash (trinary) | 128-bit |
| HE-64 | 64 | 10000769 | Homomorphic evaluation | 128-bit |

SVP Proof of Work

Traditional PoW wastes computation on hash preimages; Bloom's PoW is SVP itself. Every block mined is a data point about SVP hardness at operational parameters. The asymmetry ratio (solve cost / verify cost) is ~2^n, better than hash-based PoW.

DivHash is used for admission proof (lightweight PoW for candidate pool entry). SVP mining is the consensus-level work.

Storage Requirements

| Role | Trunk (permanent) | Tip (prunable) |
|---|---|---|
| Full node | 150 MB/year | 15 GB/year |
| Light client | 150 MB/year | -- |

Tip stems are prunable after roots are committed to trunk. Only the trunk stem is the permanent global record.

Protocol Mechanisms

Quorum Formation

- Tip level: … lowest scores. DivHash proof required for admission.
- Trunk level: … tip-level reputation (Cayley accumulator depth) above threshold.

Key Exchange

On quorum formation (5 members), ephemeral key pairs are exchanged (signed by long-term identity), deriving 10 pairwise shared secrets for GnarlSeal encrypted communication.

View Change

Leader timeout triggers seat succession (0->1->2->3->4). Tip timeout: ~2ms. Trunk timeout: ~200ms. ViewChangeMsg broadcast on timeout.

Sync Protocol

Three modes: stem download, branch download, and candidate-pool download (request/response wire format, per sync.go). Used by joining nodes to catch up to current state.

Difficulty Adjustment

Adaptive DivHash repetitions per locality, targeting healthy candidate pool size. Sliding window average over pool sizes. Current difficulty broadcast in epoch metadata.

Payload Routing

On finalized micro-block, iterate payloads, trial-decrypt with session secrets (GnarlOpen), deliver to consumer. At trunk level, extract tip epoch roots from finalized blocks. Hashcash verification on inbound payloads.

Implementation Status

What Exists as Code

Gnarl-Hamadryad crypto library: gitlab.com/mleku-dev-group/gnarl-hamadryad/

Dendrite lattice engine: git.mleku.dev/mleku/dendrite/pkg/

Bloom Protocol Files (per roadmap, not verified in repo)

The bloom-roadmap.md lists these files as complete with 226 tests:

| File | Content |
|---|---|
| micro.go | MicroBlock (106 B header), payload, marshal |
| branch.go | 27-block ternary tree, insert/seal/verify/prune |
| stem.go | Append-only epoch chain, StemEntry (47 B) |
| epoch.go | Epoch boundary seal, EpochCycle driver |
| message.go | Wire format (38 B header), 10 message types |
| reputation.go | Cayley accumulator stepping, consistency ratio |
| quorum.go | Local state machine, SelectionScore, DivHash PoW |
| dendrite.go | ColonyNode wrapper, SubmitPayload |
| store.go | StemStore (flat file), BranchStore (epoch-scoped) |
| transport.go | Transport interface, ChannelHub, ChannelTransport |
| layer_driver.go | BloomLayer lifecycle, propose/vote/finalize/seal |
| bridge.go | Tip epoch roots -> trunk payloads |
| udp_transport.go | UDP sockets, peer table, GnarlSeal encryption |
| candidate_pool.go | Admission, liveness, eviction, tip/trunk modes |
| view_change.go | Leader timeout, seat succession |
| sync.go | Stem/branch/pool download, request/response wire |
| difficulty.go | DifficultyController, sliding window, adaptive |
| payload_router.go | Trial-decrypt payloads, trunk epoch root extraction |

Status: these files are not present in the current repo checkout. The roadmap marks all 8 implementation phases as DONE but the code is not in the pkg/ tree. Either it was developed in a branch/worktree not currently checked out, or the roadmap reflects a design specification rather than shipped code.

What Definitely Exists

  1. The complete protocol design (bloom-roadmap.md)
  2. The cryptographic foundation library (gnarl-hamadryad) with working code
  3. The lattice growth engine (dendrite pkg/) with 60+ packages
  4. The Cayley accumulator for identity/reputation
  5. Ring arithmetic with NTT for both binary (n=64) and trinary (n=27) rings
  6. GnarlShard addressing with locality prefix decomposition

Porting to Smesh

To bring Bloom into the smesh stack, the following would be needed:

  1. Crypto layer: port or import gnarl-hamadryad (ring arithmetic, hash, signatures, KEM). Must be Moxie-compatible -- no reflection, no cgo, no full runtime.
  2. Data structures: implement MicroBlock, Branch, Stem, Epoch per the spec above. These are straightforward serialization and tree operations.
  3. Consensus engine: BloomLayer lifecycle driver with parameterized tip/trunk configs. The propose/vote/finalize loop.
  4. Network: UDP transport with GnarlSeal encryption. Candidate pool management with DivHash admission.
  5. Bridge: tip-to-trunk relay packaging sealed tip epoch roots as trunk payloads.
  6. Integration with Nostr: Bloom consensus provides the ordering layer; Nostr provides the messaging/event substrate. Events are the payloads that enter tip quorums.