Most homelab virtualization platforms were built primarily for x86. Proxmox VE is officially x86-only, while ARM support in platforms like VMware ESXi and Hyper-V remains limited, experimental, or outside typical homelab deployment patterns. If you have a Turing Pi 2.5 with RK1 compute modules and want proper service isolation, the practical option is Incus.
Incus is the Linux Containers project’s maintained fork of LXD. It provides LXC system containers and QEMU/KVM virtual machines through a unified CLI and REST API, and runs cleanly on Ubuntu 22.04 ARM64 on the RK3588 SoC used by RK1 modules.
This article focuses primarily on system containers. They are ARM64-native, well-tested on this hardware, and deliver near-native performance with minimal overhead. Incus also supports QEMU/KVM virtual machines, which we will cover later in the guide, including the additional kernel and performance considerations relevant to RK3588. If your cluster is not set up yet, start with the complete RK1 setup guide before continuing.
Quick Overview: Incus on Turing Pi 2.5
- What this guide covers: Incus installation, LXC system containers, and optional QEMU/KVM VM support on Ubuntu 22.04 ARM64
- Primary use case: LXC system containers for service isolation and dev environments on RK1 nodes
- Secondary use case: QEMU/KVM VMs for stronger isolation (requires /dev/kvm, treat as experimental)
- Why not Proxmox on ARM: Proxmox VE is officially x86-only, community ARM builds lag behind releases
- Prerequisites: Working Turing Pi 2.5 cluster, Ubuntu 22.04 ARM64 on RK1 nodes
- What you’ll have at the end: A working Incus install with isolated containers running on ARM64
Part 1: Why Incus for ARM Homelabs
Proxmox VE is popular in homelab environments for good reason: it is polished, well-documented, and has a large ecosystem around it. The limitation for ARM users is that Proxmox VE remains officially x86-focused. Community ARM builds exist, but they tend to lag behind upstream releases and are better suited to experimentation than long-term infrastructure.
That leaves a gap for ARM-native virtualization and service isolation platforms. Incus fits that role well. It is the maintained fork of LXD and provides first-class ARM64 support, a consistent CLI and REST API, and an optional web UI. For an ARM homelab built on LXC system containers, it is one of the cleanest actively maintained options available.
The most important distinction is that Incus supports both system containers and virtual machines. LXC system containers share the host kernel rather than virtualizing hardware. There is no separate kernel boot process or hardware emulation layer, which keeps overhead extremely low. On RK3588, containers run at near-native ARM64 performance while still providing filesystem, process, and network isolation. For most Turing Pi 2.5 workloads, containers are the practical default.
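You can see the shared-kernel model directly once a container is running. A quick check, assuming a container named my-container exists (we create one in Part 3):

# Kernel version on the RK1 host
uname -r
# Kernel version inside the container -- identical, since no second kernel boots
incus exec my-container -- uname -r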
Incus also supports QEMU/KVM virtual machines for workloads that require stronger isolation or a separate kernel environment. VM support on ARM continues to improve, but containers remain the better fit for most RK1 deployments because they make more efficient use of the available CPU and memory resources.
Part 2: Installing Incus on RK1 (Ubuntu 22.04 ARM64)
Incus packages for Ubuntu are distributed through the Zabbly repository, which provides up-to-date ARM64 builds for Ubuntu 22.04 and newer releases. On RK1 nodes running Ubuntu 22.04 ARM64, this is the cleanest installation path.
Note: Verify that the Zabbly repository is still the recommended installation method at the time of reading. Package sources for Incus on Ubuntu may change over time. Check the official Zabbly Incus repository for the latest installation instructions.
Run the following on your RK1 node:
# Install prerequisites
sudo apt-get install -y curl gnupg
# Add the Zabbly signing key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.zabbly.com/key.asc | sudo gpg --dearmor -o /etc/apt/keyrings/zabbly.gpg
# Add the Zabbly stable repository
sudo sh -c 'cat > /etc/apt/sources.list.d/zabbly-incus-stable.list << EOF
deb [arch=arm64 signed-by=/etc/apt/keyrings/zabbly.gpg] https://pkgs.zabbly.com/incus/stable $(lsb_release -sc) main
EOF'
# Update and install
sudo apt-get update
sudo apt-get install -y incus incus-ui-canonical
The incus-ui-canonical package installs the optional web UI. Skip it if you prefer CLI-only management.
Initializing Incus
sudo incus admin init
The init wizard walks you through storage pool configuration, networking, and clustering options. For a single-node setup, accepting the defaults is a reasonable starting point. Choose dir as the storage backend unless you have a dedicated block device to hand off, in which case zfs or btrfs will give you better snapshot and clone support.
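If you prefer a non-interactive setup, Incus can also initialize itself with defaults or from a preseed file. A minimal sketch; verify the flags against your installed Incus version:

# Accept sane defaults without the interactive wizard
sudo incus admin init --minimal
# Or drive initialization from a preseed file instead:
# sudo incus admin init --preseed < preseed.yaml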
Add your user to the incus-admin group so you can run incus commands without sudo:
sudo usermod -aG incus-admin $USER
newgrp incus-admin
Verifying the Installation
incus version
incus list
A clean install returns the Incus version and an empty container list. If either command errors, check that the incusd service is running with sudo systemctl status incus.
Part 3: Your First System Container
incus launch images:ubuntu/22.04 my-container
Incus fetches the ARM64 Ubuntu 22.04 image from the Linux Containers image server and starts the container. On RK1 nodes running Ubuntu 22.04 ARM64, the first launch may take a minute or two while the image downloads and initializes. Subsequent launches from the same image are nearly instant.
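If you want to browse available images before launching, you can query the image server from the CLI. The filter below is illustrative; exact aliases change over time:

# List matching ARM64 images from the Linux Containers image server
incus image list images:ubuntu/22.04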
Drop into a shell inside the container:
incus exec my-container -- bash
You are now inside an isolated Ubuntu userspace environment running on the same RK3588 kernel as the host. From here, install packages, configure services, and run workloads exactly as you would on a bare-metal Linux system.
RK3588 cgroup Compatibility
Some Ubuntu 22.04 ARM64 images for RK3588 may boot using a hybrid cgroup layout, which can prevent Incus containers from starting correctly.
If container launches fail with cgroup-related errors, enable unified cgroup v2 support by editing:
sudo nano /boot/firmware/ubuntuEnv.txt
Append the following to the existing extraargs= line:
systemd.unified_cgroup_hierarchy=1
Example, assuming extraargs was previously empty:
extraargs=systemd.unified_cgroup_hierarchy=1
If the line already carries other parameters, append the new argument after them, separated by a space.
Reboot the node afterward:
sudo reboot
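After the reboot, confirm that the unified cgroup v2 hierarchy is active:

# Should print cgroup2fs when cgroup v2 is the only mounted hierarchy
stat -fc %T /sys/fs/cgroup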
If you created the container before the reboot, start it again and reconnect with:
incus start my-container
incus exec my-container -- bash
Installing a Service Inside the Container
As a simple validation test, install nginx inside the container:
# Inside the container
apt-get update
apt-get install -y nginx
systemctl start nginx
exit
Get the container’s IP address:
incus list my-container
Then verify connectivity from the RK1 host:
curl http://<container-ip>
You should receive the default nginx welcome page from inside the container.
Each container runs with its own isolated network stack connected through the Incus bridge network. Services inside the container are reachable directly on the container’s assigned IP address, while still remaining isolated from the host filesystem and process space.
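If you would rather expose a container service on the host's own address, Incus proxy devices can forward a host port into the container. A sketch using the nginx container above; the device name web and port 8080 are arbitrary choices:

# Forward host port 8080 to port 80 inside the container
incus config device add my-container web proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
# Test via the host's own address
curl http://localhost:8080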
Checking Resource Usage
incus info my-container
This displays container state, memory usage, CPU time, network activity, and storage information. On RK1 nodes, LXC system containers carry very little overhead compared to full virtual machines: a basic Ubuntu container running nginx typically consumes only tens of megabytes of memory while idle.
Stopping and Removing a Container
incus stop my-container
incus delete my-container
Containers stop cleanly and can be snapshotted before deletion if your storage backend supports snapshots, making it easy to preserve or clone working environments.
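A minimal snapshot workflow, assuming your storage backend supports it:

# Create and list snapshots
incus snapshot create my-container pre-delete
incus snapshot list my-container
# Roll back later if needed
incus snapshot restore my-container pre-delete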
Part 4: Practical Use Cases for Containers on Turing Pi 2.5
Isolated development environments per project. Each project can run inside its own container with separate dependencies, language runtimes, and configuration. Rebuilding environments takes seconds, and the host system remains clean and predictable even when testing multiple stacks side-by-side.
Dedicated service containers. Running Pi-hole in one container, Gitea in another, and Vaultwarden in a third keeps each service isolated at the filesystem, network, and process level. For ARM homelabs, this is one of the most practical alternatives to deploying a full VM per service.
Staging before k3s. Test a workload in an Incus system container first. Validate service behavior, resource usage, and configuration locally before packaging it for Kubernetes. This reduces the chances of debugging both application issues and orchestration issues simultaneously. The k3s persistent storage and load balancing guide covers the multi-node orchestration side in more detail.
Running a different distro or OS version than the host. The host may run Ubuntu 22.04 while containers run Ubuntu 24.04, Debian, Alpine, or Fedora userspaces on the same Linux kernel. The Linux Containers image server provides ARM64 images for all of these, making Incus useful for compatibility testing and multi-environment development workflows.
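For example, the launches below pull different userspaces onto the same RK3588 kernel. The image aliases are illustrative; list current ones with incus image list images: before relying on them:

incus launch images:debian/12 debian-test
incus launch images:alpine/3.19 alpine-test
incus launch images:fedora/40 fedora-test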
Isolating noisy workloads. CPU and memory limits can be applied per container to prevent runaway workloads from consuming the resources of the entire node. Incus maps these controls directly to Linux cgroups, making resource isolation lightweight and efficient on RK3588 systems.
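Applying limits is a one-line operation per resource. A sketch; the values here are examples only:

# Cap the container at 2 CPU cores and 1 GiB of RAM
incus config set my-container limits.cpu=2
incus config set my-container limits.memory=1GiB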
Part 5: Virtual Machines on Incus
Incus also supports QEMU/KVM virtual machines alongside system containers. Before relying on VM workloads, verify that KVM support is available on the RK1 host:
ls /dev/kvm
If /dev/kvm exists, hardware virtualization support is available and Incus can use QEMU/KVM to run ARM64 virtual machines with a separate kernel and stronger isolation from the host system.
In testing on Ubuntu 22.04 ARM64 for RK1, LXC system containers provided the smoother and more practical experience for most homelab workloads due to their lower overhead and fewer compatibility edge cases. VM support on RK3588 is improving, but containers remain the better fit for most Turing Pi 2.5 deployments today.
Ubuntu 22.04 on RK1 should include KVM support in the kernel, but verify this before building infrastructure that depends on virtual machines.
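If the check passes and you want to experiment, VM launches use the same workflow as containers with a --vm flag added. Treat this as experimental on RK3588, and note that it assumes an ARM64 VM variant of the image exists on the image server:

# Launch an ARM64 VM instead of a container
incus launch images:ubuntu/22.04 test-vm --vm
incus list test-vm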
Part 6: Performance Expectations
LXC system containers run with very little overhead on RK3588 because they share the host kernel rather than virtualizing hardware. For most homelab services, the performance difference between a container and the bare host is minimal in practice.
One hardware characteristic worth understanding is that RK3588 uses unified system memory shared between CPU and GPU workloads. There is no separate memory pool for graphics or accelerated workloads. If you are running LLM inference or other GPU-accelerated workloads alongside containers on the same node, plan memory allocation carefully.
Thermal behavior also matters under sustained load. Multiple active containers, inference workloads, or storage-heavy services can push RK3588 into thermal throttling without adequate airflow. The RK1 benchmark data provides useful context around sustained CPU performance, thermals, and overall system headroom on this platform.
Part 7: When to Use Incus vs k3s
These are not competing tools. They solve different problems, and a well-configured Turing Pi 2.5 cluster may use both.
Use Incus when: you want service-level isolation for individual workloads, need full userspace environments with their own init systems and service managers, are building isolated development environments, or are managing services on a single node without multi-node orchestration requirements.
Use k3s when: you are running workloads across multiple nodes, want declarative deployments with rollback support, need persistent storage management across the cluster, or are building toward higher availability and automated scheduling.
A practical pattern is to run per-node services and development environments in Incus containers, then move workloads that benefit from orchestration into k3s. The k3s persistent storage and load balancing guide covers that side of the stack in more detail.
Part 8: What You’ve Set Up
At this point, you have a working Incus installation on Ubuntu 22.04 ARM64, the ability to launch isolated LXC system containers with close to bare-metal performance, and a practical way to run multiple services on a single RK1 node with clean separation between workloads.
If /dev/kvm is available on your system, Incus can also provide ARM64 virtual machine support through QEMU/KVM for workloads that require stronger isolation or separate kernel environments.
The Turing Pi 2.5 use cases article explores the broader range of workloads suited to this platform, including homelab services, local inference, and edge deployments. Together, these tools provide a flexible foundation for building ARM-native infrastructure on Turing Pi 2.5.
Previous Articles in This Series
Turing Pi 2.5 + RK1 Complete Setup Guide: Hardware assembly, OS flashing, networking, and bringing an RK1 cluster online for the first time.
What Can You Actually Build with Turing Pi 2.5?: A practical look at workloads suited to this platform, including homelab services, local inference, CI/CD, and edge deployments.
Run LLMs Locally on ARM: RK3588 + Ollama + llama.cpp Guide: Running local LLM inference on RK3588 with Ollama and llama.cpp, including model sizing and performance considerations.
k3s on Turing Pi 2.5: Persistent Storage and Load Balancing on ARM: Persistent storage with Longhorn, load balancing with MetalLB, and production-oriented k3s configuration on ARM.
RK1 Compute Module Benchmarks: CPU, memory, thermal, and sustained workload benchmarks for RK1 compute modules.
Building a 4-Node Turing Pi 2.5 Cluster: Complete Parts List, Real Costs & What You Actually Need: A complete breakdown of parts, pricing, power, cooling, and hardware selection for a 4-node build.
FAQ
Is Incus a good Proxmox alternative for ARM homelab builds?
Yes. For ARM64 platforms like Turing Pi 2.5 with RK1 modules, Incus is one of the most practical virtualization and service-isolation platforms currently available. Proxmox VE remains officially x86-focused, while community ARM builds tend to lag behind upstream releases.
Incus provides both LXC system containers and QEMU/KVM virtual machines under a unified CLI and REST API, runs cleanly on Ubuntu 22.04 ARM64, and is actively maintained by the Linux Containers project. For an ARM homelab built on LXC containers, it offers a mature and lightweight ARM-native workflow without the overhead of a traditional x86-centric hypervisor stack.
Can you run virtual machines on Turing Pi 2.5 RK1 with Incus?
Yes. Incus supports QEMU/KVM virtual machines on ARM64, and Ubuntu 22.04 on RK1 should include KVM support in the kernel. Verify availability by checking whether /dev/kvm exists on the host system. For most Turing Pi 2.5 homelab workloads, LXC system containers remain the more practical choice due to their lower overhead and smoother ARM64 experience, but Incus VM support is available for workloads that require stronger isolation or separate kernel environments.
What is the difference between Incus system containers and VMs on ARM?
LXC system containers share the host Linux kernel and use namespaces and cgroups for isolation. They carry very little overhead and run close to bare-metal ARM64 performance, which makes them well suited for services, development environments, and lightweight homelab workloads on RK1 nodes.
VMs use QEMU/KVM for hardware virtualization, running a separate kernel with stronger isolation from the host system. The tradeoff is additional CPU, memory, and storage overhead compared to containers, along with greater dependence on kernel virtualization support.
Does Incus work on Ubuntu 22.04 ARM64 on RK3588?
Yes. Incus installs cleanly on Ubuntu 22.04 ARM64 through the Zabbly package repository, and LXC system containers work well on RK3588-based RK1 nodes. The Linux Containers image server provides ARM64 images for Ubuntu, Debian, Alpine, Fedora, and other distributions, making it easy to run multiple userspace environments on the same host.
System containers run with very little overhead on this platform. QEMU/KVM virtual machine support is also available, though KVM functionality should be verified on your specific kernel configuration.
When should I use Incus instead of k3s on Turing Pi 2.5?
Use Incus when you want isolated service environments on a single node, need separate userspace environments with their own service managers, or are building development and staging setups. Use k3s when you need multi-node orchestration, declarative deployments, automated scheduling, or persistent workloads distributed across the cluster.
How are Incus system containers different from Docker containers?
Incus system containers behave more like lightweight virtual machines. They run a full Linux userspace with their own init system, background services, networking stack, and process tree while still sharing the host kernel.
Docker containers are typically application-focused and designed to run a single service or process per container. They are usually built from application images and integrated into orchestration workflows such as Docker Compose or Kubernetes.
On Turing Pi 2.5, Incus containers are useful for isolated development environments, self-hosted services, and lightweight infrastructure workloads, while Docker containers are often better suited for packaging and deploying individual applications.