
Apache CloudStack Deployment on OVH Bare Metal

A complete guide to setting up an all-in-one Apache CloudStack 4.22 instance on OVH dedicated hardware, covering networking, vRack bridges, NFS storage, KVM, and zone configuration.

Insidious Fiddler · February 27, 2026 · 18 min read · infrastructure, cloudstack, ovh

I wanted full control over my VM infrastructure without the unpredictable billing of public clouds. OVH’s bare metal + vRack combo is cheap and gives you a real private network, so I set up CloudStack on it. After a few months of running this in production, here’s the complete walkthrough.

We’re building an all-in-one Apache CloudStack instance on a single OVH dedicated machine. This one host runs the management server, MySQL, NFS storage, and KVM hypervisor — everything needed for a working cloud. It’s based on the Apache CloudStack Quick Installation Guide and targets CloudStack 4.22 on Rocky Linux 8 or 9.

Configuration

This guide uses %%PLACEHOLDER%% tokens (e.g. %%PUBLIC_NIC%%, %%PUBLIC_IP%%, %%MGMT_IP%%, %%DOMAIN%%) wherever a value depends on your environment. Substitute them with your own settings as you go.

Here’s the network topology we’re building — three distinct paths, each with a specific role:

Network topology showing an OVH bare metal host with three network paths: a public NIC (enp10s0f0) connected directly to the internet, cloudbr0 bridge on the vRack NIC carrying management and storage traffic, and cloudbr1 bridge with a dummy interface for guest VLANs 700-799. KVM virtual machines connect to both bridges.

Why CloudStack?

Now, before we dive in, I want to quickly cover exactly why I chose CloudStack for this project. There are plenty of open-source cloud platforms out there — OpenStack, Proxmox, oVirt, etc. — so why CloudStack?

Simply put, CloudStack is a clean, well-documented, and mature platform that does exactly what I need without unnecessary complexity. It has a straightforward architecture, a user-friendly UI, excellent support for KVM, and an in-house Kubernetes-as-a-service offering. For an all-in-one setup on a single host, it hits the sweet spot of features vs. simplicity.

Compare that to something like OpenStack, which is designed for much larger scale and a higher degree of customization. That flexibility also means more moving parts, a steeper learning curve, and more maintenance overhead. CloudStack’s design is more monolithic and less modular than OpenStack’s, but for a single-host deployment that actually makes it easier to set up and manage.

Prerequisites

Before you begin:

  1. OVH Control Panel — order a vRack (free), add your server to it, and add your /28 IP block to the vRack.
  2. DNS — point your CloudStack domain (e.g. %%DOMAIN%%) to the server’s public IP. This is required for TLS later.
  3. Identify your NICs — run ip link show and match MACs against the OVH panel. The public NIC is typically the first port; the one labelled “Private” is the vRack NIC.
  4. IPMI/KVM access — we’re going to reconfigure networking, which can drop your SSH session. Have Serial over LAN ready via the OVH IPMI panel before proceeding.

Operating System

Start with a full system upgrade and install some basic tools we’ll need throughout:

Terminal window
dnf -y upgrade
dnf install -y chrony wget tar bash-completion net-tools bind-utils

SELinux

Setting SELinux to permissive means it logs policy violations but does not enforce them. This weakens the security posture of the host. If you plan to run this in production, consider writing a custom SELinux policy for CloudStack instead.

CloudStack requires SELinux in permissive mode. Set it for the running system:

Terminal window
setenforce 0

Then make it persist across reboots by editing /etc/selinux/config:

SELINUX=permissive
SELINUXTYPE=targeted
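If you'd rather script the change than open an editor, a sed one-liner does the same thing (assuming the stock config file with a single SELINUX= line):

Terminal window

```shell
# Persist permissive mode non-interactively, then confirm both the runtime
# and on-disk settings
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
getenforce
grep '^SELINUX=' /etc/selinux/config
```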

NTP (Chrony)

If a host shows “Alert” in the CloudStack UI with no obvious cause, check clock sync first. Even a few seconds of drift can trigger it.

CloudStack is very sensitive to clock drift.

Terminal window
systemctl enable chronyd
systemctl start chronyd

The default chrony configuration is fine for our purposes.
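Before moving on, it's worth confirming the clock is actually synchronized; a quick check might look like:

Terminal window

```shell
# Show chrony's sync status (stratum, offset, leap status)
chronyc tracking
# systemd's view: "System clock synchronized: yes" means we're good
timedatectl | grep -i 'synchronized'
```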

Configuring the CloudStack Package Repository

Create /etc/yum.repos.d/cloudstack.repo:

[cloudstack]
name=cloudstack
baseurl=http://download.cloudstack.org/centos/$releasever/4.22/
enabled=1
gpgcheck=0
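A quick sanity check that the repo resolves before we start installing from it:

Terminal window

```shell
# Refresh metadata and confirm the cloudstack repo shows up as enabled
dnf clean metadata
dnf repolist enabled | grep -i cloudstack
```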

We also need to disable firewalld since we’ll manage iptables directly:

Terminal window
systemctl disable --now firewalld 2>/dev/null || true

Kernel Modules

CloudStack needs br_netfilter for bridge-aware netfilter processing. We also need the dummy module to create a virtual interface for the guest traffic bridge.

Terminal window
cat > /etc/modules-load.d/cloudstack.conf << 'EOF'
br_netfilter
dummy
EOF
modprobe br_netfilter
modprobe dummy

The Dummy Interface

cloudbr1 is the guest traffic bridge. CloudStack creates VLAN sub-interfaces on it at runtime, but a Linux bridge needs at least one port to exist. Since there’s no dedicated physical NIC for guest traffic, we create a dummy interface called ens99 and use a systemd service so it survives reboots.

Terminal window
cat > /etc/systemd/system/dummy-ens99.service << 'EOF'
[Unit]
Description=Create dummy interface ens99 for cloudbr1
Before=NetworkManager.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ip link add ens99 type dummy
ExecStart=/usr/sbin/ip link set ens99 up
ExecStop=/usr/sbin/ip link del ens99
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now dummy-ens99.service
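Verify the service ran and the interface exists:

Terminal window

```shell
systemctl is-active dummy-ens99.service
# "state UNKNOWN" is normal for a dummy interface; it just needs to exist and be up
ip -br link show ens99
```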

Configuring the Network

This section reconfigures networking and will likely drop your SSH session. Have IPMI/Serial over LAN access ready via the OVH panel before proceeding. If you lose connectivity, use the SoL console to debug with nmcli con show.

This is the most critical step and the one most likely to drop your SSH session.

The network layout is:

  1. %%PUBLIC_NIC%% (standalone): the public NIC, carrying the host’s own internet access and SSH.
  2. cloudbr0 on %%VRACK_NIC%%: the vRack bridge, carrying management, storage, and CloudStack public traffic.
  3. cloudbr1 on dummy interface ens99: the guest bridge, carrying VLANs 700-799.

The public NIC stays unbridged on purpose. On OVH, putting the public IP on a bridge triggers MAC-based filtering that drops your traffic.

First, clean up any stale connections:

Terminal window
nmcli con delete "System %%PUBLIC_NIC%%" 2>/dev/null || true
nmcli con delete "Wired connection 1" 2>/dev/null || true
nmcli con delete cloudbr1-port0 2>/dev/null || true

Public NIC (standalone)

Terminal window
nmcli con add type ethernet \
con-name public \
ifname %%PUBLIC_NIC%% \
ipv4.addresses "%%PUBLIC_IP%%/24" \
ipv4.gateway "%%PUBLIC_GW%%" \
ipv4.dns "8.8.8.8" \
ipv4.method manual \
ipv6.method disabled \
connection.autoconnect yes

cloudbr0 — vRack Bridge (Management + Public CloudStack traffic)

This bridge gets two IPs: the vRack /28 address (for public CloudStack traffic like virtual routers, SSVM, CPVM) and the management IP (for internal CloudStack communication).

Terminal window
nmcli con add type bridge \
con-name cloudbr0 \
ifname cloudbr0 \
ipv4.addresses "%%VRACK_IP%%/28,%%MGMT_IP%%/24" \
ipv4.method manual \
ipv4.never-default yes \
bridge.stp no \
bridge.forward-delay 0 \
ipv6.method disabled \
connection.autoconnect yes \
connection.autoconnect-slaves 1
nmcli con add type bridge-slave \
con-name cloudbr0-port0 \
ifname %%VRACK_NIC%% \
master cloudbr0

The ipv4.never-default yes prevents cloudbr0 from installing a default route — all internet traffic goes out the public NIC. STP is disabled because in a single-host setup with no bridge loops, it just adds a 30-second boot delay for no benefit.

cloudbr1 — Guest VLAN Bridge

Terminal window
nmcli con add type bridge \
con-name cloudbr1 \
ifname cloudbr1 \
ipv4.method disabled \
ipv6.method disabled \
bridge.stp no \
bridge.forward-delay 0 \
connection.autoconnect yes

Bring Everything Up

Terminal window
nmcli con up public
nmcli con up cloudbr0
nmcli con up cloudbr0-port0
nmcli con up cloudbr1
# Enslave dummy interface to cloudbr1 via ip link
# (nmcli can't match 802-3-ethernet profiles to dummy devices reliably)
ip link set ens99 master cloudbr1

Verify

Terminal window
ip addr show %%PUBLIC_NIC%%
ip addr show cloudbr0
bridge link show
ip route show default
ping -c 2 8.8.8.8
ping -c 2 %%VRACK_GW%% # any IP in your vRack range

If SSH drops during this step, use the IPMI/SoL console and check that the public connection activated correctly with nmcli con show.

Kernel Parameters

Terminal window
cat > /etc/sysctl.d/99-cloudstack.conf << 'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
EOF
sysctl --system

What these do:

  1. ip_forward / conf.all.forwarding: let the host route packets between the bridges and the public NIC.
  2. rp_filter = 0: disable reverse-path filtering, which would otherwise drop the asymmetric traffic a multi-homed host like this produces.
  3. bridge-nf-call-* = 0: keep bridged traffic out of iptables/arptables, so guest frames crossing the bridges aren’t mangled by host firewall rules.
  4. arp_announce = 2 / arp_ignore = 1: answer ARP only on the interface that actually owns the address, which matters with multiple IPs on cloudbr0.

iptables / NAT

Terminal window
dnf install -y iptables-services
iptables -F
iptables -t nat -F
iptables -X 2>/dev/null || true
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
# Allow forwarding through both bridges
iptables -A FORWARD -i cloudbr0 -j ACCEPT
iptables -A FORWARD -o cloudbr0 -j ACCEPT
iptables -A FORWARD -i cloudbr1 -j ACCEPT
iptables -A FORWARD -o cloudbr1 -j ACCEPT
# SNAT: give the management subnet internet access through this host's public IP
iptables -t nat -A POSTROUTING -s %%MGMT_NET%%/24 -d %%MGMT_NET%%/24 -j RETURN
iptables -t nat -A POSTROUTING -s %%MGMT_NET%%/24 -o %%PUBLIC_NIC%% -j SNAT --to-source %%PUBLIC_IP%%

The first NAT rule is a short-circuit — traffic staying within the management subnet doesn’t get rewritten. The second rule rewrites the source address of outbound management traffic so it can reach the internet.

Persist and enable:

Terminal window
service iptables save
systemctl enable iptables

Boot-Time Networking Service

Reboots don’t always replay things in the order you’d expect. This oneshot systemd service acts as a safety net — it runs after networking is online, before the CloudStack services, and idempotently ensures everything is in place.

cat > /usr/local/bin/ovh-cloudstack-networking.sh << 'SCRIPT'
#!/bin/bash
modprobe br_netfilter 2>/dev/null || true
sysctl -w net.ipv4.ip_forward=1 > /dev/null
sysctl -w net.ipv4.conf.all.forwarding=1 > /dev/null
sysctl -w net.bridge.bridge-nf-call-iptables=0 > /dev/null 2>&1 || true
sysctl -w net.bridge.bridge-nf-call-arptables=0 > /dev/null 2>&1 || true
ip addr add %%VRACK_IP%%/28 dev cloudbr0 2>/dev/null || true
ip addr add %%MGMT_IP%%/24 dev cloudbr0 2>/dev/null || true
ip link set ens99 master cloudbr1 2>/dev/null || true
iptables -t nat -C POSTROUTING -s %%MGMT_NET%%/24 -d %%MGMT_NET%%/24 -j RETURN 2>/dev/null || \
iptables -t nat -A POSTROUTING -s %%MGMT_NET%%/24 -d %%MGMT_NET%%/24 -j RETURN
iptables -t nat -C POSTROUTING -s %%MGMT_NET%%/24 -o %%PUBLIC_NIC%% -j SNAT --to-source %%PUBLIC_IP%% 2>/dev/null || \
iptables -t nat -A POSTROUTING -s %%MGMT_NET%%/24 -o %%PUBLIC_NIC%% -j SNAT --to-source %%PUBLIC_IP%%
echo "OVH CloudStack vRack networking configured"
SCRIPT
chmod +x /usr/local/bin/ovh-cloudstack-networking.sh

Every iptables command uses -C (check) before -A (append) so rules don’t stack on repeated runs.

Now create the systemd unit:

Terminal window
cat > /etc/systemd/system/ovh-cloudstack-networking.service << 'EOF'
[Unit]
Description=OVH CloudStack vRack Network Setup
After=NetworkManager.service network-online.target dummy-ens99.service
Before=cloudstack-management.service cloudstack-agent.service
Wants=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/ovh-cloudstack-networking.sh
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now ovh-cloudstack-networking.service

Reboot Verification

Before moving on, reboot the machine and verify that everything survives. This catches service ordering issues, missing enable flags, and transient network config that didn’t persist.

Terminal window
reboot

After the machine comes back (use IPMI/SoL if SSH doesn’t reconnect), verify:

Terminal window
# Bridges exist with correct IPs
ip addr show cloudbr0
ip addr show cloudbr1
# Dummy interface is enslaved to cloudbr1
bridge link show
# Default route goes out the public NIC
ip route show default
# Internet connectivity
ping -c 2 8.8.8.8
# vRack connectivity
ping -c 2 %%VRACK_GW%%
# iptables NAT rules persisted
iptables -t nat -L POSTROUTING -n --line-numbers
# Kernel modules loaded
lsmod | grep -E 'br_netfilter|dummy'
# Sysctl values applied
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables

If anything is missing, fix it before proceeding — debugging networking issues after CloudStack is installed is significantly harder.

NFS Storage

Our configuration uses NFS for both primary and secondary storage, served from this same host.

Terminal window
dnf install -y nfs-utils
mkdir -p /export/primary /export/secondary

Configure the exports in /etc/exports. We allow access from both the management subnet and the vRack subnet:

Terminal window
cat > /etc/exports << 'EOF'
/export/primary %%MGMT_NET%%/24(rw,async,no_root_squash,no_subtree_check) %%VRACK_NET%%/28(rw,async,no_root_squash,no_subtree_check)
/export/secondary %%MGMT_NET%%/24(rw,async,no_root_squash,no_subtree_check) %%VRACK_NET%%/28(rw,async,no_root_squash,no_subtree_check)
EOF

NFSv4 requires the domain setting to match across all clients. Ensure that /etc/idmapd.conf has:

Domain = local
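A non-interactive way to make that edit, assuming the stock idmapd.conf with its commented-out Domain line:

Terminal window

```shell
# Uncomment (or replace) the Domain line, then confirm
sed -i 's/^#\?Domain = .*/Domain = local/' /etc/idmapd.conf
grep '^Domain' /etc/idmapd.conf
```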

Start the NFS services:

Terminal window
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -rav
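Confirm the exports are actually visible before CloudStack tries to mount them:

Terminal window

```shell
# Both /export/primary and /export/secondary should be listed,
# each allowing the management and vRack subnets
showmount -e localhost
```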

Database Installation and Configuration

CloudStack uses MySQL for its database backend.

Terminal window
dnf -y install mysql-server

Add the following to the [mysqld] section of /etc/my.cnf.d/mysql-server.cnf:

innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log_bin=mysql-bin
binlog_format=ROW

Start MySQL:

Terminal window
systemctl enable mysqld
systemctl start mysqld

Management Server Installation

Install the management server:

Terminal window
dnf -y install cloudstack-management

CloudStack 4.22 requires Java 17 JRE. The management server package pulls it in automatically, but if you had a previous Java version installed, confirm Java 17 is selected:

Terminal window
alternatives --config java

Initialize the database. This generates a random password for the cloud database user:

Terminal window
cloudstack-setup-databases cloud:$(openssl rand -hex 32)@localhost --deploy-as=root

You should see “CloudStack has successfully initialized the database.” when it finishes.

Complete the management server setup:

Terminal window
cloudstack-setup-management

System Template Setup

CloudStack uses system VMs (SSVM, CPVM) that need a template downloaded to secondary storage. Since we’re on the NFS server itself, we use the local path directly:

Terminal window
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
-m /export/secondary \
-u http://download.cloudstack.org/systemvm/4.22/systemvmtemplate-4.22.0-x86_64-kvm.qcow2.bz2 \
-h kvm -F

This downloads and extracts the system VM template. It takes a few minutes depending on your connection.
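To confirm the template landed, check what's on disk (the installer unpacks the qcow2 plus a template.properties file under a numeric id directory; the exact path varies):

Terminal window

```shell
find /export/secondary -name '*.qcow2' -o -name template.properties
du -sh /export/secondary
```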

KVM Setup and Installation

Since this is an all-in-one host, we’re using the management server as a compute node as well. We’ve already done the prerequisites (networking, SELinux, chrony, repo), so we just need the agent and KVM packages.

Installation

Terminal window
dnf -y install cloudstack-agent qemu-kvm libvirt

QEMU Configuration

CloudStack requires QEMU to run as root. Edit /etc/libvirt/qemu.conf and ensure these lines are present and uncommented:

user = "root"
group = "root"

Libvirt Configuration

For live migration to work (relevant when you add more hosts later), libvirt needs to listen for unsecured TCP connections. We also disable multicast DNS advertising. Edit /etc/libvirt/libvirtd.conf and add:

listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"
mdns_adv = 0

On EL8/EL9, libvirt uses systemd socket activation by default, which conflicts with CloudStack’s expectations. Mask the socket units and override the service to use --listen mode:

Terminal window
systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
mkdir -p /etc/systemd/system/libvirtd.service.d
cat > /etc/systemd/system/libvirtd.service.d/override.conf << 'EOF'
[Service]
ExecStart=
ExecStart=/usr/sbin/libvirtd --listen
EOF
systemctl daemon-reload
systemctl enable --now libvirtd
systemctl restart libvirtd

Verify libvirtd is listening on TCP:

Terminal window
ss -tlnp | grep 16509

And verify KVM is loaded:

Terminal window
lsmod | grep kvm

You should see kvm_amd or kvm_intel depending on your CPU.

CloudStack SSH Key Setup

The management server generates an SSH keypair on first start. We need to add that public key to the root user’s authorized_keys so CloudStack can SSH into this host and manage it as a KVM hypervisor.

First, wait for the management server to finish its initial startup (it generates the key during this process):

Terminal window
# Wait for the key to appear
while [[ ! -f /var/cloudstack/management/.ssh/id_rsa.pub ]]; do
echo "Waiting for management server keypair..."
sleep 5
done

Then add it:

Terminal window
mkdir -p /root/.ssh
chmod 700 /root/.ssh
cat /var/cloudstack/management/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
systemctl restart sshd
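You can verify the key works the same way CloudStack will, by logging in with the management server's private key (should print the hostname without a password prompt):

Terminal window

```shell
ssh -i /var/cloudstack/management/.ssh/id_rsa \
    -o BatchMode=yes -o StrictHostKeyChecking=accept-new \
    root@%%MGMT_IP%% hostname
```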

CloudStack Usage Server

The usage server tracks resource consumption for billing and reporting. It’s optional but good to have from the start.

Terminal window
dnf -y install cloudstack-usage

Enable it via the CloudStack database:

Terminal window
mysql -u root cloud -e "UPDATE configuration SET value='true' WHERE name='enable.usage.server';"
mysql -u root cloud -e "UPDATE configuration SET value='1440' WHERE name='usage.stats.job.aggregation.range';"
mysql -u root cloud -e "UPDATE configuration SET value='00:15' WHERE name='usage.stats.job.exec.time';"

Start the usage server:

Terminal window
systemctl enable cloudstack-usage
systemctl start cloudstack-usage
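A quick check that the service is up and the configuration flags stuck:

Terminal window

```shell
systemctl is-active cloudstack-usage
# Re-read the rows we just updated
mysql -u root cloud -e "SELECT name, value FROM configuration WHERE name IN ('enable.usage.server','usage.stats.job.aggregation.range','usage.stats.job.exec.time');"
```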

Nginx Reverse Proxy + TLS

By default, the CloudStack UI runs on port 8080 over plain HTTP. We’ll put nginx in front of it with a Let’s Encrypt TLS certificate.

Terminal window
dnf -y install epel-release
dnf -y install nginx certbot python3-certbot-nginx

Bind CloudStack to localhost

Edit /etc/cloudstack/management/server.properties and set:

bind.interface=127.0.0.1

This ensures the management UI is only accessible through nginx, not directly on port 8080.

Nginx Configuration

Create /etc/nginx/conf.d/%%DOMAIN%%.conf:

server {
listen 80;
server_name %%DOMAIN%%;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (CloudStack console proxy)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 600s;
}
}

Disable the default server block (on EL systems it may live directly in /etc/nginx/nginx.conf rather than in conf.d/; comment it out there if so) and start nginx:

Terminal window
rm -f /etc/nginx/conf.d/default.conf
nginx -t
systemctl enable --now nginx

Obtain TLS Certificate

Terminal window
certbot --nginx \
-d %%DOMAIN%% \
--non-interactive \
--agree-tos \
-m %%EMAIL%%

Certbot will modify the nginx config in-place to add the TLS directives. Enable auto-renewal:

Terminal window
systemctl enable --now certbot-renew.timer
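It's worth confirming renewal will actually work before the 90-day clock runs out:

Terminal window

```shell
# Simulates a full renewal against the staging endpoint without
# touching the real certificate
certbot renew --dry-run
```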

Update CloudStack Endpoint

Tell CloudStack about its new HTTPS URL:

Terminal window
mysql -u root cloud -e "UPDATE configuration SET value='https://%%DOMAIN%%/client/api' WHERE name='endpoint.url';"

Restart the management server to pick up the bind address change:

Terminal window
systemctl restart cloudstack-management

UI Access

Point your browser to https://%%DOMAIN%%. The default username is admin and the default password is password. The zone setup wizard will launch on first login.

Zone Configuration

After logging in, the zone wizard walks you through the initial setup. Here are the values to use with our configuration.

Zone Type

Select Advanced Zone.

Zone Details

Name: zone1
IPv4 DNS 1: 213.186.33.99
IPv4 DNS 2: 1.1.1.1
Internal DNS 1: 213.186.33.99
Hypervisor: KVM

Why 213.186.33.99 instead of 8.8.8.8? CloudStack system VMs only have access to the vRack network, and OVH’s DNS servers are reachable there while Google’s public DNS is not. Using an unreachable DNS here will cause SSVM and CPVM to fail template downloads and patching.

Physical Network — Traffic Type Labels

All traffic except Guest goes over cloudbr0. Guest goes over cloudbr1.

Management: cloudbr0
Public: cloudbr0
Guest: cloudbr1
Storage: cloudbr0

Public Traffic IP Range

These are the IPs CloudStack assigns to virtual routers, SSVM, CPVM, and static NAT. They come from your /28 block.

Gateway: last usable IP in the /28 (e.g. %%VRACK_GW%%)
Netmask: %%VRACK_MASK%%
VLAN: (leave blank)
Start IP: first usable IP after your host’s bridge IP
End IP: last usable IP before the gateway

Your host’s bridge IP (%%VRACK_IP%% in our example) is not in this range — it’s used by the host itself.

Pod Configuration

Pod Name: pod1
Gateway: %%MGMT_IP%%
Netmask: 255.255.255.0
Start IP: %%POD_START%%
End IP: %%POD_END%%

Guest VLAN Range

VLAN Range: 700-799

These VLANs are used for isolated guest networks, tagged on cloudbr1.

Cluster

Cluster Name: cluster1
Hypervisor: KVM

Host

Hostname: %%MGMT_IP%%
Username: root
Password: (your root password, or leave blank if the SSH key was set up)

Primary Storage

Name: primary
Protocol: NFS
Server: %%MGMT_IP%%
Path: /export/primary

Secondary Storage

Name: secondary
Provider: NFS
Server: %%MGMT_IP%%
Path: /export/secondary

Finish

Click through to create the zone. CloudStack will add the host and start the Secondary Storage VM (SSVM) and Console Proxy VM (CPVM). Wait 5-10 minutes for the system VMs to fully boot.

Verification

After zone creation, check that system VMs are healthy:

Terminal window
# Check system VM bridge assignments
for vm in $(virsh list --name); do
echo "--- $vm ---"
virsh domiflist "$vm"
done

SSVM and CPVM should have NICs on cloudbr0 (management + public) and cloud0 (link-local). In the CloudStack UI, check the following:

Infrastructure -> System VMs — both should show “Running”.

Infrastructure -> Hosts should show your host as “Up”.

Troubleshooting

Keep an eye on these log files:

/var/log/cloudstack/management/management-server.log: management server
/var/log/cloudstack/agent/agent.log: KVM agent

Host shows “Disconnected” — the agent can’t reach the management server. Verify cloudbr0 has the management IP (ip addr show cloudbr0).

Host shows “Alert” — usually clock drift. Run chronyc tracking and confirm the system time is in sync.

System VMs won’t start — NFS mount failure. Run showmount -e %%MGMT_IP%% and try a manual mount from the management IP.

VMs have no public connectivity — verify the /28 IP block is assigned to the vRack in the OVH control panel and that the vRack NIC is enslaved to cloudbr0 (bridge link show).

SSVM/CPVM on wrong bridge — traffic type labels are wrong. Fix them in the database, restart the agent, and recreate the system VMs.

Live migration fails (after adding more hosts) — ensure port 16509 (libvirt) and 49152-49216 (QEMU migration) are reachable between hosts over the vRack.

Security Considerations

This guide prioritizes getting a working deployment over hardening. Before running anything important on it, address these:

gpgcheck=0 on the CloudStack repo — we disabled GPG verification because CloudStack’s RPM repo doesn’t consistently sign packages. This means dnf won’t verify package integrity. If you’re concerned about supply chain attacks, download packages manually, verify checksums against the Apache release page, and install with rpm -ivh.

auth_tcp = "none" on libvirt — unauthenticated TCP is fine when libvirt is only reachable over the vRack (private L2). If you add more hosts, any machine on that vRack segment can control your hypervisor. For multi-host deployments, switch to TLS (auth_tls) or SASL authentication.

No iptables INPUT filtering — we set INPUT ACCEPT with no restrictions. The host relies entirely on OVH’s upstream firewall and the assumption that the vRack is trusted. At minimum, add INPUT rules to restrict management ports (8080, 8250, 16509, 3306) to the management subnet and drop everything else from the vRack.
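As a starting point, a minimal sketch of such INPUT rules might look like this. Review it against your own topology before applying, and keep an IPMI session open in case you lock yourself out:

Terminal window

```shell
# Sketch only -- adapt to your environment before applying.
# Allow loopback and established flows
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# SSH (consider restricting by source IP)
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Management ports, only from the management subnet
iptables -A INPUT -s %%MGMT_NET%%/24 -p tcp -m multiport --dports 8080,8250,16509,3306 -j ACCEPT
# Drop everything else arriving from the vRack
iptables -A INPUT -i cloudbr0 -j DROP
service iptables save
```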

Root SSH with authorized_keys — CloudStack manages the hypervisor over SSH as root. This is by design, but it means anyone with the management server’s private key (/var/cloudstack/management/.ssh/id_rsa) has root on every host. Protect that key and restrict SSH access by source IP where possible.

What’s Next

Apache CloudStack Dashboard

You now have a working CloudStack cloud on a single OVH bare metal machine. From the UI you can deploy instances, register your own templates and ISOs, create isolated guest networks on the 700-799 VLAN range, set up port forwarding and firewall rules on virtual routers, and spin up Kubernetes clusters via the CloudStack Kubernetes Service.

For an OVH Advance-2 (roughly $80/month), you get 64GB RAM and plenty of CPU to run a dozen or so VMs. Compare that to the equivalent on AWS or GCP and the math speaks for itself — especially for always-on workloads.

For deeper reading, the CloudStack Administration Guide covers everything from network models to storage plugins. The CloudStack mailing lists are also surprisingly active and helpful if you get stuck.