# Deploying on Bare Metal
Oxia can be deployed without Kubernetes by running the coordinator and data server processes directly. The Raft metadata provider enables a fully replicated, highly available coordinator cluster suitable for production use.
## Building from source
Clone the repository and build the binary:

```shell
$ git clone https://github.com/oxia-db/oxia.git
$ cd oxia
$ make
```

After building, the binary is at `bin/oxia`.
## Architecture overview
A bare-metal Oxia deployment consists of:
- 3+ Data Servers — Store and replicate data across shards.
- 3 Coordinators (recommended) — Manage cluster state using Raft consensus for high availability.
## Deploying data servers
Start three or more data server instances. Each server needs unique addresses and separate storage directories.
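The storage directories must exist and be writable before the servers start. A quick sketch that creates the layout used in the examples below (the /var/lib/oxia root matches those examples; creating directories there typically requires root privileges):

```shell
# Create per-node data and WAL directories. The /var/lib/oxia root is
# taken from the examples below; adjust it to your own mount points.
set -e
for n in 0 1 2; do
  mkdir -p "/var/lib/oxia/node$n/db" "/var/lib/oxia/node$n/wal"
done
```

Keeping the WAL in its own directory also makes it easy to place it on a separate, faster disk than the main data store.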
```shell
# data-server-0
$ ./bin/oxia server \
    -p 0.0.0.0:6648 -i 0.0.0.0:6649 -m 0.0.0.0:8080 \
    --data-dir /var/lib/oxia/node0/db \
    --wal-dir /var/lib/oxia/node0/wal

# data-server-1
$ ./bin/oxia server \
    -p 0.0.0.0:6660 -i 0.0.0.0:6661 -m 0.0.0.0:8081 \
    --data-dir /var/lib/oxia/node1/db \
    --wal-dir /var/lib/oxia/node1/wal

# data-server-2
$ ./bin/oxia server \
    -p 0.0.0.0:6662 -i 0.0.0.0:6663 -m 0.0.0.0:8082 \
    --data-dir /var/lib/oxia/node2/db \
    --wal-dir /var/lib/oxia/node2/wal
```

### Data server flags
| Flag | Description | Default |
|---|---|---|
| -p, --public-addr | Client-facing bind address | 0.0.0.0:6648 |
| -i, --internal-addr | Internal (cluster) bind address | 0.0.0.0:6649 |
| -m, --metrics-addr | Prometheus metrics bind address | 0.0.0.0:8080 |
| --data-dir | Directory for data storage | ./data/db |
| --wal-dir | Directory for write-ahead logs | ./data/wal |
| --wal-retention-time | WAL entry retention duration | 1h |
| --wal-sync-data | Fsync WAL writes for durability | true |
| --db-cache-size-mb | Shared DB cache size in MB | 100 |
| --notifications-retention-time | Notification feed retention duration | 1h |
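In a real deployment each data server would normally run under a process supervisor rather than a foreground shell. A minimal sketch that generates a systemd unit for data-server-0 (the unit name, the installed binary path /usr/local/bin/oxia, and the restart policy are assumptions, not part of Oxia; repeat per node with that node's flags):

```shell
# Sketch: write a systemd unit for data-server-0. Binary path and
# restart policy are assumptions -- adapt them to your installation.
cat > oxia-server-0.service <<'EOF'
[Unit]
Description=Oxia data server 0
After=network-online.target

[Service]
ExecStart=/usr/local/bin/oxia server -p 0.0.0.0:6648 -i 0.0.0.0:6649 -m 0.0.0.0:8080 --data-dir /var/lib/oxia/node0/db --wal-dir /var/lib/oxia/node0/wal
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Copy the unit into /etc/systemd/system/ and enable it with `systemctl enable --now oxia-server-0`.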
## Cluster configuration
Create a configuration file (`oxia_conf.yaml`) that defines the namespaces and lists all data servers:
```yaml
namespaces:
  - name: default
    initialShardCount: 3
    replicationFactor: 3

servers:
  - public: server0.example.com:6648
    internal: server0.example.com:6649
  - public: server1.example.com:6660
    internal: server1.example.com:6661
  - public: server2.example.com:6662
    internal: server2.example.com:6663
```

## Deploying coordinators
### With Raft (recommended for production)
The Raft metadata provider replicates the cluster state across multiple coordinator instances, providing automatic leader election and failover.
Deploy three coordinator instances, each with a unique --raft-address and the same
--raft-bootstrap-nodes list:
```shell
# coordinator-0
$ ./bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator0.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator0/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8083

# coordinator-1
$ ./bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator1.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator1/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8084

# coordinator-2
$ ./bin/oxia coordinator \
    --metadata raft \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --raft-address coordinator2.example.com:6680 \
    --raft-bootstrap-nodes coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680 \
    --raft-data-dir /var/lib/oxia/coordinator2/raft \
    -i 0.0.0.0:6649 -m 0.0.0.0:8085
```

All coordinators must be started with the same --raft-bootstrap-nodes list. On first start, the nodes form a Raft cluster and elect a leader. The leader manages shard assignments and leader election for data servers, while the followers replicate the cluster state and take over automatically if the leader fails.
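Raft needs a majority of the bootstrap nodes to elect a leader, so a three-coordinator group keeps operating with one coordinator down. A small sketch of the quorum arithmetic for the bootstrap list shown above:

```shell
# Quorum arithmetic for the coordinator bootstrap list above.
BOOTSTRAP="coordinator0.example.com:6680,coordinator1.example.com:6680,coordinator2.example.com:6680"
NODES=$(awk -F',' '{print NF}' <<< "$BOOTSTRAP")   # number of coordinators
QUORUM=$(( NODES / 2 + 1 ))                        # majority needed for election
echo "nodes=$NODES quorum=$QUORUM tolerated_failures=$(( NODES - QUORUM ))"
# → nodes=3 quorum=2 tolerated_failures=1
```

Five coordinators would tolerate two failures; even group sizes add nodes without adding fault tolerance, so odd sizes are the usual choice.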
### Raft coordinator flags
| Flag | Description | Default |
|---|---|---|
| --metadata | Metadata provider (raft, file, configmap) | file |
| --cconfig | Path to cluster configuration file | |
| --raft-address | This coordinator's Raft address (required with raft) | |
| --raft-bootstrap-nodes | Comma-separated list of all coordinator Raft addresses | |
| --raft-data-dir | Directory for Raft state storage | data/raft |
| -i, --internal-addr | Internal service bind address | 0.0.0.0:6649 |
| -a, --admin-addr | Admin service bind address | 0.0.0.0:6650 |
| -m, --metrics-addr | Metrics bind address | 0.0.0.0:8080 |
### With file provider (single coordinator)
For development or testing, a single coordinator can use the file-based metadata provider:
```shell
$ ./bin/oxia coordinator \
    --metadata file \
    --cconfig /etc/oxia/oxia_conf.yaml \
    --file-clusters-status-path /var/lib/oxia/cluster-status.json \
    -i 0.0.0.0:6649 -m 0.0.0.0:8083
```

This stores cluster state in a local JSON file. It does not support high availability: if the coordinator goes down, the cluster cannot reassign shards until it recovers.
## Testing the deployment
Once all components are running, verify the cluster with the performance tool:

```shell
$ ./bin/oxia perf -a server0.example.com:6648 --rate 10000
```

Or interact with the cluster using the CLI:

```shell
$ ./bin/oxia client -a server0.example.com:6648 put /hello <<< "world"
$ ./bin/oxia client -a server0.example.com:6648 get /hello
```