A message queue I built from scratch.

5,600 lines of Go. Custom storage engine, consensus algorithm, streaming API. No dependencies on Kafka or ZooKeeper.

1.7M writes/sec · 3.9M reads/sec · 5,600+ lines · 21 tests

What is this, in plain English?

What does it do?

It passes messages between different parts of an application. App A sends a message, App B receives it later. The messages are stored durably in between.

Why would I need this?

When you have multiple services that need to communicate. Instead of Service A calling Service B directly (and failing if B is down), A drops a message in the queue and moves on. B picks it up when it's ready.

How is this different from Kafka?

Kafka requires a JVM, ZooKeeper, and a lot of configuration. This is one Go binary with two command-line flags. Same concepts, simpler to run.

Is this production-ready?

It's a portfolio project demonstrating distributed systems concepts. The code works and passes tests, but it hasn't been battle-tested at scale like Kafka has.

What did you actually build here?

Everything from scratch: the storage engine (how data is written to disk efficiently), the consensus algorithm (how multiple servers agree on data), the network protocol (how clients talk to servers). No copy-paste from existing projects.

How it works

Producers send records. The broker stores them durably and replicates to followers. Consumers read at their own pace.

graph LR
    P[Producer] --> B[Broker]
    B --> C[Consumer]
    B --> S[Storage]
    S --> WAL[Write-Ahead Log]
    S --> MT[MemTable]
    MT --> SST[SSTables]

The storage engine

When you write data, it goes to a write-ahead log first (so we don't lose it if the server crashes), then into memory (for fast reads). A background process periodically flushes memory to disk as sorted files.

sequenceDiagram
    Client->>Broker: write("order:123")
    Broker->>WAL: append to log file
    Broker->>MemTable: insert in memory
    Broker-->>Client: ok, offset=42
    Note over MemTable: background flush
    MemTable->>SSTable: write sorted file to disk

Leader election

If you run multiple servers, one becomes the leader and handles writes. If the leader dies, the others vote and pick a new one. This is based on the Raft algorithm.

flowchart TD
    A[Follower] -->|timeout| B[Candidate]
    B -->|wins vote| C[Leader]
    B -->|loses| A
    C -->|higher term| A
    C -->|heartbeats| C

The code

Here's what each file does. Everything is in the internal/ directory.

storage/skiplist.go: In-memory sorted data structure. O(log n) inserts and lookups. Uses a random number generator to decide node heights.
storage/wal.go: Write-ahead log for crash recovery. Every write goes here first. Includes checksums to detect corruption.
storage/sstable.go: On-disk sorted files. Data is written in blocks with an index for fast lookups.
storage/lsm.go: Coordinates the storage engine. Manages memtable rotation and background flushes.
broker/broker.go: Topic and partition management. Routes produce/consume requests to the right place.
broker/consumer_group.go: Assigns partitions to consumers. Tracks who has read what.
raft/node.go: The state machine for consensus. Handles transitions between follower, candidate, and leader.
raft/replication.go: How leaders send data to followers. Handles conflicts and retries.
server/grpc_server.go: The network layer. Accepts produce and consume requests over gRPC.
metrics/prometheus.go: Exposes stats like throughput and latency for monitoring.

Getting started

You'll need Go 1.22+ installed. That's it; no other dependencies.

step 1: clone the repo
git clone https://github.com/matteso1/sentinel.git
cd sentinel
step 2: run the tests (optional)
go test -v ./...
=== RUN   TestSkipList_BasicOperations
--- PASS: TestSkipList_BasicOperations (0.00s)
=== RUN   TestBroker_ProduceConsume
--- PASS: TestBroker_ProduceConsume (0.17s)
...
PASS (21 tests)
step 3: start the server
go run ./cmd/sentinel-server --port 9092 --data ./data
Starting Sentinel server on port 9092 (data: ./data)
Sentinel server listening on port 9092
step 4: produce messages (new terminal)
go run ./cmd/sentinel-cli produce \
    -broker localhost:9092 \
    -topic orders \
    -partition 0 \
    -message "order:12345" \
    -count 100
✓ Produced 100 record(s) to orders (offset: 0)
step 5: consume messages
go run ./cmd/sentinel-cli consume \
    -broker localhost:9092 \
    -topic orders \
    -partition 0 \
    -from-beginning
Consuming from orders (partition 0, offset 0)...
------------------------------------------------------------
offset=0 | key=(null) | value=order:12345-0
offset=1 | key=(null) | value=order:12345-1
offset=2 | key=(null) | value=order:12345-2
...
------------------------------------------------------------
Consumed 100 message(s)
bonus: run benchmarks
go test -bench=. ./internal/storage/...
BenchmarkSkipList_Put-32    1684428    594.9 ns/op
BenchmarkSkipList_Get-32    3882814    257.5 ns/op
# That's 1.7M writes/sec and 3.9M reads/sec