Journal

What it's like building a crypto mining pool

Daemons, stratum, VarDiff, and why we picked Contabo over Vultr. The infrastructure behind Bitmern Solo Pool.

A few months ago I started building a solo mining pool from scratch. The whole thing. Daemons, stratum endpoints, a dashboard, payment processing, five different blockchains running simultaneously on dedicated servers. I was CMO at Bitmern Mining at the time, an institutional Bitcoin mining company with ASIC hosting facilities around the world, and we needed a pool that matched the level of experience we were giving our clients. So I built Bitmern Solo Pool.

I want to walk through what that actually involved, because I think most people underestimate what goes into running a mining pool. The code is the easy part. The infrastructure is where it gets difficult.

Every proof-of-work blockchain needs a full node running to validate transactions and talk to the network. In mining, these are called daemons. A daemon downloads the entire blockchain, stays synced block by block, and gives the pool software what it needs to construct work for miners. For Bitmern Solo, we run daemons for Bitcoin, Bitcoin Cash, Litecoin, Dogecoin, and DigiByte. Each one is its own process with its own config, its own RPC credentials, and its own appetite for resources. Bitcoin's daemon alone needs north of 600GB of disk space for the full chain. DigiByte has 15-second block times, so that daemon is constantly validating new blocks. When any of them falls behind or crashes, miners on that coin are dead in the water. There's no partial sync. It works or it doesn't.
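To make the "its own config, its own RPC credentials" part concrete, here is the general shape of a bitcoind config tuned for pool duty. These are all real bitcoind options, but the values and credentials are illustrative placeholders, not our production settings:

```ini
# Illustrative bitcoind config for a pool daemon (placeholder values).
server=1                  # enable JSON-RPC so the pool engine can request work
daemon=1
txindex=1
rpcuser=poolrpc           # placeholder credentials
rpcpassword=change-me
rpcbind=127.0.0.1
rpcallowip=127.0.0.1      # only the local pool process may connect
zmqpubhashblock=tcp://127.0.0.1:28332   # push new-block notifications to the pool
dbcache=4096              # MB of UTXO cache; more RAM = faster validation
```

Each of the five coins gets its own variant of this, with its own data directory, ports, and credentials.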

We evaluated hosting early and went with Contabo over Vultr. Vultr has a cleaner developer experience, better APIs, faster provisioning. But when you're running five full-node daemons that need heavy disk I/O, 32GB of RAM, and hundreds of gigs of NVMe storage running 24/7, the monthly bill on Vultr gets steep fast. Contabo's pricing on dedicated resources is significantly cheaper. The management panel is rougher. The provisioning takes longer. I didn't care. These servers aren't handling web requests. They're syncing blockchains. Raw compute and storage per dollar is what matters, and Contabo wins that math.

Between the daemons and the actual mining hardware sits the stratum protocol. Stratum is how ASICs communicate with a pool. It's a JSON-RPC protocol over raw TCP, purpose-built for mining. When someone plugs in their Antminer and points it at stratum+tcp://btc.bitmernsolo.com:3102, the machine gets work from the pool, crunches hashes, and submits shares back. When a new block is found on the network, stratum pushes fresh work immediately so the miner isn't wasting cycles on a stale template.
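For a sense of what that conversation looks like on the wire, here is a sketch of a stratum v1 session: newline-delimited JSON-RPC over the TCP socket. The wallet, worker name, and job fields below are placeholders:

```json
{"id": 1, "method": "mining.subscribe", "params": ["example-miner/1.0"]}
{"id": 2, "method": "mining.authorize", "params": ["bc1qexample.worker1", "x"]}
{"id": null, "method": "mining.notify", "params": ["job-1", "<prevhash>", "<coinb1>", "<coinb2>", ["<merkle-branch>"], "<version>", "<nbits>", "<ntime>", true]}
{"id": 3, "method": "mining.submit", "params": ["bc1qexample.worker1", "job-1", "<extranonce2>", "<ntime>", "<nonce>"]}
```

The `mining.notify` message is the "fresh work" push described above; the final `true` flag tells the miner to drop in-progress work because the old template is now stale.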

For the stratum layer we used Miningcore, a mature open-source pool engine written in C#. It handles stratum negotiation, share validation, difficulty adjustment, and payment processing. It talks to each daemon via RPC, constructs the block templates, and validates every share that comes back.
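A trimmed config fragment shows how the pieces wire together, shaped like Miningcore's published example config. The wallet address, credentials, and exact numbers here are placeholders rather than our production values:

```json
{
  "pools": [{
    "id": "btc",
    "enabled": true,
    "coin": "bitcoin",
    "address": "bc1qpoolwalletexample",
    "daemons": [{
      "host": "127.0.0.1",
      "port": 8332,
      "user": "poolrpc",
      "password": "change-me"
    }],
    "ports": {
      "3102": {
        "listenAddress": "0.0.0.0",
        "difficulty": 10000,
        "varDiff": {
          "minDiff": 10000,
          "maxDiff": 25000,
          "targetTime": 15,
          "retargetTime": 90,
          "variancePercent": 30
        }
      }
    }
  }]
}
```

One `daemons` entry per coin daemon, one `ports` entry per difficulty tier, and the `varDiff` block drives the per-miner retargeting described below.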

Each coin gets its own stratum endpoint with multiple ports at different difficulty levels. Bitcoin has four ports with VarDiff targets ranging from 10k to 25k. Litecoin has three. This matters because a BitAxe running at 500 GH/s needs a completely different share difficulty than an S21 XP running at 270 TH/s. If the difficulty is too high, the small miner never submits a share. Too low, the big miner floods the pool with easy shares that waste bandwidth.
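The mismatch is easy to quantify. For SHA-256 coins, a difficulty-1 share takes about 2^32 hashes on average, so expected shares per second is hashrate divided by (difficulty × 2^32). A quick sketch using the two machines mentioned above (the function is mine, the math is standard pool arithmetic):

```typescript
// Expected shares per second for a SHA-256 miner at a given share difficulty.
// A difficulty-1 share takes ~2^32 hashes on average.
function sharesPerSecond(hashrate: number, difficulty: number): number {
  return hashrate / (difficulty * 2 ** 32);
}

const bitaxe = sharesPerSecond(500e9, 10_000);  // 500 GH/s at diff 10k
const s21xp = sharesPerSecond(270e12, 10_000);  // 270 TH/s at diff 10k

console.log(bitaxe.toFixed(4)); // ~0.0116 -> one share every ~86 seconds
console.log(s21xp.toFixed(2));  // ~6.29 shares per second
```

At the same difficulty, the S21 XP submits shares over 500 times as often as the BitAxe, which is exactly why each machine class needs its own port and VarDiff range.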

The front-end is Next.js, hosted on Vercel. The dashboard shows pool hashrate, individual miner stats, worker status, earnings over time, payment history, and hashrate charts. We built config generators that spit out ready-to-paste stratum settings for every major ASIC brand plus the common open setups: Antminer, Whatsminer, Avalon, Goldshell, IceRiver, Innosilicon, BitAxe, and generic CGMiner rigs. A miner signs up, picks their coin, and gets exact instructions for their specific hardware.

I originally wanted WebSockets for the dashboard. Real-time hashrate streaming, live worker status changes, block notifications pushed straight to the browser. I spent time trying to figure out if we could tap into the stratum layer and pipe events through a WebSocket gateway to the front-end. The idea was appealing on paper.

In practice, it fell apart. Miningcore exposes a REST API, not a streaming interface. Bolting a WebSocket server onto a system that internally polls a REST endpoint is just adding a layer of complexity that buys you nothing. You end up maintaining a persistent connection server that's secretly just fetching the same JSON every few seconds and forwarding it. That's polling with extra steps and more things to break.

So we went with 10-second HTTP polling and made it smart. The dashboard hits a Next.js API route that fans out parallel requests to Miningcore for pool and miner data, merges the responses, and sends it back. We added EMA smoothing (exponential moving average with an alpha of 0.3) so hashrate values don't jump around between poll ticks. We pause polling when the browser tab isn't visible and resume on focus. We deduplicate in-flight requests so switching between coins quickly doesn't stack up redundant calls. The result feels responsive. And for mining metrics, where the difference between a 10-second update and a 1-second update changes absolutely nothing about what a miner should do, polling was the right architecture.
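The smoothing step itself is tiny. A minimal sketch of the EMA update with the alpha of 0.3 mentioned above (the function name and sample values are mine):

```typescript
// One EMA update: blend the newest poll sample into the displayed value.
// alpha = 0.3 means each new poll contributes 30% of what the user sees.
function emaStep(prev: number | null, sample: number, alpha = 0.3): number {
  // First poll seeds the average; later polls get smoothed.
  return prev === null ? sample : alpha * sample + (1 - alpha) * prev;
}

// Noisy per-poll hashrate readings...
let shown: number | null = null;
for (const hashrate of [100, 180, 90, 110]) {
  shown = emaStep(shown, hashrate);
}
// ...produce a displayed value that drifts instead of jumping every tick.
```

Low alpha smooths more but lags real changes; 0.3 keeps the chart calm while a genuinely offline worker still shows up within a few polls.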

One thing I underestimated early on was latency. When a miner submits a share, the round-trip time between their hardware and the stratum server directly affects stale rates. If a new block hits the network and the job notification takes 300ms to reach the miner, they're still grinding on a stale template. That share is wasted. For serious miners, anything above 200ms of ping to the stratum endpoint starts eating into efficiency.
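A back-of-envelope way to see the cost: if fresh work arrives `latency` seconds after a new block, then roughly latency divided by the average block interval of all work is spent on stale templates. This is my approximation, not the pool's actual stale-rate model, but it shows why short-block-time coins are the most latency-sensitive:

```typescript
// Rough fraction of work wasted on stale templates: the window between a new
// block appearing and fresh work reaching the miner, over the block interval.
function staleFraction(latencySeconds: number, blockTimeSeconds: number): number {
  return latencySeconds / blockTimeSeconds;
}

// 300 ms of job latency barely registers on Bitcoin's ~600 s blocks...
console.log((staleFraction(0.3, 600) * 100).toFixed(2) + "%"); // 0.05%
// ...but on DigiByte's 15-second blocks it is 2% of all work.
console.log((staleFraction(0.3, 15) * 100).toFixed(2) + "%"); // 2.00%
```

The same 300ms is forty times more expensive on DigiByte than on Bitcoin, which is why the coins with fast blocks set the latency budget for the whole stack.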

This is why multi-region stratum endpoints matter. A miner in Frankfurt connecting to our server in Dallas sees 120 to 150ms on a good day. A miner in Singapore, maybe 250ms or worse. Each region needs its own Miningcore instance running its own set of synced daemons, with DNS routing that sends miners to their closest endpoint. Bitmern Solo is running out of US Central right now, Dallas specifically. EU and Asia-Pacific expansions were on the roadmap, and the architecture supports it. Each regional stack operates independently but shares a single database so user accounts, wallet addresses, and earnings stay unified across regions.

The full stack, for anyone curious: Contabo VPS boxes running the coin daemons and Miningcore for stratum. Supabase for user accounts, wallet addresses, alert preferences, and payment records. Next.js on Vercel for the dashboard, with API routes that proxy to Miningcore's REST API. Resend handles transactional emails like worker-offline alerts and payout notifications. Vercel cron jobs run daily checks for alert conditions and abandoned cart recovery, because yes, they also sell mining equipment through the platform.

Building a mining pool is an infrastructure problem that happens to need software. The daemons need to stay synced. The stratum server needs to stay up. The difficulty targets need to be right for machines ranging from a hobbyist BitAxe to a warehouse full of S21 XPs. The latency needs to be low enough that shares don't go stale. And all of it runs continuously, because miners don't take weekends off and neither do blockchains.

Bitmern Solo hit 2.5 PH/s within a week of launch. Every decision, from Contabo over Vultr to polling over WebSockets to the VarDiff port ranges, fed into that number. None of it was glamorous. All of it mattered.