
Deploying on Bare Metal

Oxia can be deployed without Kubernetes by running the coordinator and data server processes directly. The Raft metadata provider enables a fully replicated, highly available coordinator cluster suitable for production use.

Building from source

Clone the repository and build the binary:

$ git clone https://github.com/oxia-db/oxia.git
$ cd oxia
$ make

After building, the binary is at bin/oxia.

Architecture overview

A bare-metal Oxia deployment consists of:

  • 3+ Data Servers — Store and replicate data across shards.
  • 3 Coordinators (recommended) — Manage cluster state using Raft consensus for high availability.

Deploying data servers

Start three or more data server instances. Each server needs unique addresses and separate storage directories.

# data-server-0
$ ./bin/oxia server \
    -p 0.0.0.0:6648 -i 0.0.0.0:6649 -m 0.0.0.0:8080 \
    --data-dir /var/lib/oxia/node0/db \
    --wal-dir /var/lib/oxia/node0/wal

# data-server-1
$ ./bin/oxia server \
    -p 0.0.0.0:6660 -i 0.0.0.0:6661 -m 0.0.0.0:8081 \
    --data-dir /var/lib/oxia/node1/db \
    --wal-dir /var/lib/oxia/node1/wal

# data-server-2
$ ./bin/oxia server \
    -p 0.0.0.0:6662 -i 0.0.0.0:6663 -m 0.0.0.0:8082 \
    --data-dir /var/lib/oxia/node2/db \
    --wal-dir /var/lib/oxia/node2/wal
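In production you will usually want each data server supervised and restarted on failure. A minimal systemd unit sketch for data-server-0 — the install path /usr/local/bin/oxia and the oxia service user are assumptions, not part of this guide:

```ini
# /etc/systemd/system/oxia-server.service (sketch; install path and user are assumptions)
[Unit]
Description=Oxia data server
After=network-online.target
Wants=network-online.target

[Service]
User=oxia
ExecStart=/usr/local/bin/oxia server \
    -p 0.0.0.0:6648 -i 0.0.0.0:6649 -m 0.0.0.0:8080 \
    --data-dir /var/lib/oxia/node0/db \
    --wal-dir /var/lib/oxia/node0/wal
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now oxia-server, and repeat per node with that node's ports and directories.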

Data server flags

| Flag | Description | Default |
|------|-------------|---------|
| -p, --public-addr | Client-facing bind address | 0.0.0.0:6648 |
| -i, --internal-addr | Internal (cluster) bind address | 0.0.0.0:6649 |
| -m, --metrics-addr | Prometheus metrics bind address | 0.0.0.0:8080 |
| --data-dir | Directory for data storage | ./data/db |
| --wal-dir | Directory for write-ahead logs | ./data/wal |
| --wal-retention-time | WAL entry retention duration | 1h |
| --wal-sync-data | Fsync WAL writes for durability | true |
| --db-cache-size-mb | Shared DB cache size in MB | 100 |
| --notifications-retention-time | Notification feed retention duration | 1h |

Cluster configuration

Create a configuration file (oxia_conf.yaml) that defines the namespaces and lists all data servers:

namespaces:
  - name: default
    initialShardCount: 3
    replicationFactor: 3
servers:
  - public: server0.example.com:6648
    internal: server0.example.com:6649
  - public: server1.example.com:6660
    internal: server1.example.com:6661
  - public: server2.example.com:6662
    internal: server2.example.com:6663
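More than one namespace can be declared in the same file. A sketch adding a second namespace — the name app-events is purely illustrative — using only the fields shown above; the servers section stays unchanged:

```yaml
# Same oxia_conf.yaml, with an additional namespace (servers section unchanged)
namespaces:
  - name: default
    initialShardCount: 3
    replicationFactor: 3
  - name: app-events          # hypothetical name, for illustration
    initialShardCount: 6      # shard count and replication can differ per namespace
    replicationFactor: 3
```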

Deploying coordinators

The Raft metadata provider replicates the cluster state across multiple coordinator instances, providing automatic leader election and failover.

Deploy three coordinator instances, each with a unique --raft-address and the same --raft-bootstrap-nodes list:

# coordinator-0
$ ./bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator0.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator0/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8083

# coordinator-1
$ ./bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator1.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator1/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8084

# coordinator-2
$ ./bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator2.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator2/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8085

All coordinators must be started with the same --raft-bootstrap-nodes list. On first start, the nodes form a Raft cluster and elect a leader. The leader manages shard assignments and leader election for data servers, while followers replicate the cluster state and take over automatically if the leader fails.
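Coordinators benefit from the same process supervision as data servers. A minimal systemd sketch for coordinator-0, under the same assumptions about the install path and service user:

```ini
# /etc/systemd/system/oxia-coordinator.service (sketch; install path and user are assumptions)
[Unit]
Description=Oxia coordinator (Raft metadata provider)
After=network-online.target
Wants=network-online.target

[Service]
User=oxia
ExecStart=/usr/local/bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator0.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator0/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8083
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With Restart=on-failure, a crashed coordinator rejoins the Raft cluster on restart; as long as a majority (2 of 3) of coordinators are up, the cluster keeps a leader.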

Raft coordinator flags

| Flag | Description | Default |
|------|-------------|---------|
| --metadata | Metadata provider (raft, file, configmap) | file |
| --cconfig | Path to cluster configuration file | |
| --raft-address | This coordinator's Raft address (required with raft) | |
| --raft-bootstrap-nodes | Comma-separated list of all coordinator Raft addresses | |
| --raft-data-dir | Directory for Raft state storage | data/raft |
| -i, --internal-addr | Internal service bind address | 0.0.0.0:6649 |
| -a, --admin-addr | Admin service bind address | 0.0.0.0:6650 |
| -m, --metrics-addr | Metrics bind address | 0.0.0.0:8080 |

With file provider (single coordinator)

For development or testing, a single coordinator can use the file-based metadata provider:

$ ./bin/oxia coordinator \
    --metadata file \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --file-clusters-status-path /var/lib/oxia/cluster-status.json \
    -i 0.0.0.0:6649 -m 0.0.0.0:8083

This stores cluster state in a local JSON file. It does not support high availability — if the coordinator goes down, the cluster cannot reassign shards until it recovers.

Testing the deployment

Once all components are running, verify the cluster with the performance tool:

$ ./bin/oxia perf -a server0.example.com:6648 --rate 10000

Or interact with the cluster using the CLI:

$ ./bin/oxia client -a server0.example.com:6648 put /hello <<< "world"
$ ./bin/oxia client -a server0.example.com:6648 get /hello