Ocis is a key-value storage project implemented in F# with two runnable forms:
- Ocis: An embedded storage engine with WiscKey-style key/value separation
- Ocis.Server: A TCP server that exposes `SET`/`GET`/`DELETE` operations via a custom binary protocol
- Suitable for single-node deployments requiring a compact embedded engine and a lightweight TCP server
- Includes durability modes (`Strict`, `Balanced`, `Fast`), WAL replay, SSTable compaction, and recovery tests
- Not a distributed/replicated database (no Raft, no multi-node consistency, no built-in failover)
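The WAL replay mentioned above follows the usual append-then-replay pattern. The sketch below is illustrative Python, not Ocis's actual record format (which is defined in the F# engine); the length-prefixed layout and the `wal_append`/`wal_replay` names are assumptions for the example.

```python
import os
import struct

# Illustrative sketch only: shows the append-then-replay idea behind WAL
# recovery, not Ocis's actual on-disk record format.

def wal_append(f, key: bytes, value: bytes) -> None:
    # Length-prefixed record: [key_len][val_len][key][value]
    f.write(struct.pack("<II", len(key), len(value)))
    f.write(key)
    f.write(value)
    f.flush()
    os.fsync(f.fileno())  # durable before acknowledging (Strict-style)

def wal_replay(path: str) -> dict:
    # Rebuild in-memory state by re-applying every record in log order;
    # a short read at the tail is treated as a torn final record.
    state = {}
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            klen, vlen = struct.unpack("<II", header)
            key = f.read(klen)
            value = f.read(vlen)
            state[key] = value
    return state
```

Because records are replayed in order, the last write to a key wins, which is exactly what recovery needs.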
```
Ocis/
├── Ocis/                  # Core storage engine
│   ├── Ocis.fs            # Main engine implementation
│   └── Ocis.fsproj        # Project file
├── Ocis.Server/           # TCP server
│   ├── Program.fs         # CLI entry point
│   ├── Config.fs          # Configuration validation
│   ├── Host.fs            # Hosted service
│   ├── Server.fs          # TCP server
│   ├── DbDispatcher.fs    # Database dispatcher
│   └── Ocis.Server.fsproj # Project file
├── Ocis.Tests/            # Engine tests
├── Ocis.Server.Tests/     # Server tests
├── Ocis.Perf/             # Performance testing tools
└── Ocis.Perf.Tests/       # Performance test validation
```
- Language/Runtime: F# on .NET 10
- Hosting Framework: `Microsoft.Extensions.Hosting`
- Logging: `Microsoft.Extensions.Logging`
- CLI Framework: `FSharp.SystemCommandLine`
- Engine: Strict single-thread affinity with fail-fast thread checks
- Server: Bounded queue + dedicated dispatcher thread with asynchronous request processing
- Key Metadata: Stored in Memtable/SSTable
- Value Data: Stored in append-only ValueLog
- Durability: WAL (Write-Ahead Log) for durability and replay
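The key/value separation above can be sketched in a few lines. This is an illustrative Python model of the WiscKey-style layout, not the Ocis API: the `KvSeparatedStore` name and its methods are invented for the example. The point is that the memtable holds only small `key -> (offset, length)` entries while full values go to an append-only log.

```python
import io

# Minimal sketch of WiscKey-style key/value separation (not the Ocis API):
# the memtable/SSTable layer keeps only small key -> location entries, while
# full values live in an append-only value log.

class KvSeparatedStore:
    def __init__(self):
        self.vlog = io.BytesIO()   # stand-in for the append-only ValueLog file
        self.memtable = {}         # key -> (offset, length) metadata only

    def set(self, key: bytes, value: bytes) -> None:
        offset = self.vlog.seek(0, io.SEEK_END)  # always append at the tail
        self.vlog.write(value)
        self.memtable[key] = (offset, len(value))

    def get(self, key: bytes):
        loc = self.memtable.get(key)
        if loc is None:
            return None
        offset, length = loc
        self.vlog.seek(offset)
        return self.vlog.read(length)
```

Keeping values out of the memtable/SSTables is what keeps flush and compaction cheap: only small metadata entries are rewritten, never the values themselves.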
```mermaid
flowchart LR
    C[Client] --> P[Protocol Parse]
    P --> H[Request Handler]
    H --> Q[DbDispatcher Queue]
    Q --> T[Dedicated DB Thread]
    T --> E[OcisDB]
    E --> WAL[WAL]
    E --> VLOG[ValueLog]
    E --> MEM[Memtable]
    MEM -->|flush| SST[SSTables]
    WAL -->|batch/timer durable commit| H
    H --> R[Protocol Response]
    R --> C
```
- Strict: Each write waits for durable WAL flush before success
- Balanced: Group commit (time window + batch size trigger)
- Fast: No per-request durable wait (highest throughput, weakest durability)
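The Balanced mode's group-commit trigger (time window + batch size) can be sketched as follows. This is illustrative Python, not the F# implementation; `GroupCommitter` and its parameters are invented for the example, though the 5 ms / 64-record defaults mirror the server's CLI defaults shown below.

```python
import time

# Sketch of a group-commit trigger (illustrative only): writes are
# acknowledged after a shared durable flush that fires when either a
# batch-size threshold or a time window is reached, amortizing fsync cost.

class GroupCommitter:
    def __init__(self, flush, window_ms: float = 5.0, batch_size: int = 64):
        self.flush = flush              # durable-commit callback (e.g. WAL fsync)
        self.window_s = window_ms / 1000.0
        self.batch_size = batch_size
        self.pending = []
        self.window_start = None

    def submit(self, record) -> bool:
        # Returns True when this submission triggered a durable flush.
        now = time.monotonic()
        if not self.pending:
            self.window_start = now     # window opens with the first pending write
        self.pending.append(record)
        if (len(self.pending) >= self.batch_size
                or now - self.window_start >= self.window_s):
            self.flush(self.pending)    # one durable commit covers the batch
            self.pending = []
            return True
        return False
```

One fsync then acknowledges the whole batch, which is why Balanced sits between Strict (fsync per write) and Fast (no per-request durable wait).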
- Strict single-thread engine is enforced by thread affinity checks in core operations
- Server dispatcher binds the engine to a dedicated worker thread
- Balanced durability is optimized to avoid dispatcher head-of-line blocking via deferred commit waiting
- WAL checkpoint/reset is implemented and covered by tests
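The bounded-queue-plus-dedicated-thread pattern above can be sketched like this. Again this is an illustrative Python model, not the F# `DbDispatcher`: the class and method names are invented, but it shows why thread-affinity checks hold — every engine operation executes on the one worker thread, and the bounded queue applies backpressure to request handlers.

```python
import queue
import threading

# Sketch of the dispatcher pattern (not the F# implementation): a bounded
# queue feeds a single dedicated thread, so the engine only ever runs on
# that thread, matching its strict single-thread affinity.

class DbDispatcher:
    def __init__(self, capacity: int = 8192):
        self.queue = queue.Queue(maxsize=capacity)  # bounded: full queue blocks producers
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # The only thread that ever touches engine state.
        while True:
            op, reply = self.queue.get()
            if op is None:
                break                   # shutdown sentinel
            reply.put(op())             # run the engine operation, hand back the result

    def dispatch(self, op):
        # Called from any request-handling thread; blocks if the queue is full.
        reply = queue.Queue(maxsize=1)
        self.queue.put((op, reply))
        return reply.get()              # wait for the DB thread to execute op

    def stop(self):
        self.queue.put((None, None))
        self.thread.join()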
Environment: local developer machine, single node, 256 B values, short repeated runs aggregated by Ocis.Perf into `BenchmarkDotNet.Artifacts/results/throughput/`.
| Mode | Workload | Throughput (ops/s) | p99 (ms) |
|---|---|---|---|
| Balanced | set | 128.22 | 8.12 |
| Strict | set | 324.36 | 6.04 |
| Fast | set | 49,405.64 | 0.0089 |
| Balanced | get | 997,083.92 | 0.002 |
| Balanced | mixed | 428.73 | 8.09 |
| Mode | Throughput (ops/s) | p99 (ms) |
|---|---|---|
| Balanced | 3,109.27 | 19.79 |
| Strict | 410.06 | 94.04 |
| Fast | 35,196.24 | 21.07 |
Notes:
- The large Balanced improvement comes from deferred commit waiting + batch trigger path
- These are not cross-machine benchmark claims; treat them as current repository baseline snapshots
```shell
dotnet build Ocis.sln -c Release
```

`working-dir` is a required positional argument. Note: the directory must exist before running.
```shell
# Create the data directory first
mkdir -p ./data

# Run the server
dotnet run --project Ocis.Server/Ocis.Server.fsproj -- ./data \
  --host 0.0.0.0 \
  --port 7379 \
  --max-connections 1000 \
  --flush-threshold 1000 \
  --durability-mode Balanced \
  --group-commit-window-ms 5 \
  --group-commit-batch-size 64 \
  --db-queue-capacity 8192 \
  --checkpoint-min-interval-ms 30000 \
  --log-level Info
```

```shell
# Engine + server tests
dotnet test Ocis.Tests/Ocis.Tests.fsproj --filter "TestCategory!=Slow"
dotnet test Ocis.Server.Tests/Ocis.Server.Tests.fsproj

# Performance harness tests
dotnet test Ocis.Perf.Tests/Ocis.Perf.Tests.fsproj
```

```shell
# Engine matrix (strict single-thread baseline)
bash scripts/run-throughput-engine.sh

# Server matrix
bash scripts/run-throughput-server.sh 127.0.0.1 7379
```

See docs/operations/performance-testing.md for warmup/repeat/aggregation format and interpretation.
Suitable for:
- Single-node service deployment with explicit durability mode selection
For production exposure:
- Put TLS/auth in front (reverse proxy / gateway)
- Monitor request latency, error rate, dispatcher queue depth, and WAL growth
- Run crash-recovery and throughput checks before release
Related documentation:
- docs/operations/production-runbook.md
- docs/operations/release-checklist.md
- docs/operations/rollback-playbook.md
See LICENSE.