After the cluster is running, the next question is usually straightforward: what should actually run on it? Turing Pi 2.5 self-hosted apps now cover a far broader range of workloads than earlier ARM homelab systems realistically supported. The RK1 compute modules provide a low-power, always-on ARM64 platform capable of handling storage services, media servers, development tooling, observability stacks, local AI inference, and network infrastructure within a compact cluster setup.

If you are still bringing your cluster online, the complete setup guide from unboxing to k3s covers the path to a functioning ARM64 cluster. If you are still deciding on module tiers, storage, and cooling, the Turing Pi 2.5 build guide covers the hardware planning side.

This article is intentionally distinct from the use cases and project ideas article, which focuses on deployment scenarios and project inspiration. This one focuses on the software itself: which services run well on ARM64 systems, what deployment considerations matter for different workloads, and how to match those workloads to practical node configurations.


Quick Overview: Turing Pi 2.5 Self-Hosted Apps by Category

  • What this guide covers: Self-hosted software that runs well on Turing Pi 2.5 and RK3588 ARM64 systems, including storage platforms, media servers, observability stacks, development tooling, network infrastructure, and local AI inference
  • ARM64 compatibility note: Many actively maintained self-hosted platforms now provide ARM64-compatible container images, although feature parity and release timing may still vary between ARM64 and x86 builds
  • Deployment approach: Workloads throughout the guide are mapped to practical 8 GB, 16 GB, and 32 GB node configurations based on typical resource requirements
  • How to use this guide: Use the workload notes, deployment considerations, and node allocation guidance to plan services incrementally as your homelab stack grows

Part 1: Personal Cloud and File Storage

| Service | ARM64 Support | Notes |
|---------|---------------|-------|
| Nextcloud | Official ARM64 images | Can function as a full self-hosted productivity suite when paired with Collabora or OnlyOffice; community-tested on RK3588 systems |
| Seafile | ARM64 images available | Lightweight file sync platform with efficient delta synchronization and lower baseline overhead than Nextcloud |
| Syncthing | ARM64 native | Peer-to-peer file synchronization with no central server requirement; minimal RAM usage; useful for cross-node replication and backups |

File storage workloads benefit more from storage throughput and latency than raw CPU performance. NVMe-backed nodes reduce latency during large uploads, thumbnail generation, OCR ingestion, and synchronization operations. For persistent volume configuration in a k3s environment, the k3s persistent storage and load balancing guide covers Longhorn-backed volumes that work well with these workloads.
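As a concrete illustration of that pairing, the sketch below shows a PersistentVolumeClaim for Nextcloud data backed by Longhorn. It assumes Longhorn is already installed with its default StorageClass, as in the linked k3s storage guide; the claim name and size are illustrative.

```yaml
# Hypothetical PVC for Nextcloud data on a Longhorn-backed k3s cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data        # name is illustrative
spec:
  accessModes:
    - ReadWriteOnce           # the common mode for a single Nextcloud pod
  storageClassName: longhorn  # assumes the default Longhorn StorageClass
  resources:
    requests:
      storage: 100Gi          # size to your NVMe capacity and Longhorn replica count
```

Note that Longhorn replicates each volume across nodes, so the effective raw storage consumed is the requested size multiplied by the replica count.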


Part 2: Media Servers

Jellyfin is one of the most commonly used open-source media servers for ARM64 systems, with official ARM64 images and active maintenance. Hardware transcoding on RK3588 is possible but typically requires manual configuration of hardware acceleration paths and compatible ffmpeg support rather than automatic detection. Plex also provides ARM64 support; hardware transcoding generally requires Plex Pass along with additional RK3588-specific configuration. Navidrome focuses on music streaming and has a very small resource footprint, making it easy to run alongside other services on an 8 GB node.

For media workloads on ARM hardware, direct play is usually the simplest and most efficient approach. When a client requests a format that must be transcoded, CPU utilization and memory usage increase significantly. Where possible, configure clients to prefer formats already stored in the media library to reduce real-time transcoding overhead on RK3588 systems.
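When hardware transcoding is worth attempting, the Docker Compose fragment below sketches the device passthrough Jellyfin needs on RK3588. The device node list follows Jellyfin's Rockchip hardware-acceleration documentation but varies by kernel and vendor image, so treat the paths as assumptions to verify on your system.

```yaml
# Illustrative Compose fragment for Jellyfin with Rockchip VPU passthrough.
# Device paths vary by kernel; check which nodes exist under /dev first.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest        # official multi-arch image includes linux/arm64
    group_add:
      - video
    devices:
      - /dev/dri:/dev/dri                  # GPU render nodes
      - /dev/mpp_service:/dev/mpp_service  # Rockchip Media Process Platform (VPU)
      - /dev/rga:/dev/rga                  # 2D raster engine used for scaling
      - /dev/dma_heap:/dev/dma_heap        # DMA buffer allocation
    volumes:
      - ./config:/config
      - ./media:/media:ro
```

Even with the devices mapped, hardware acceleration must still be enabled in Jellyfin's playback settings, and the container's ffmpeg build must support the Rockchip paths.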


Part 3: Home Automation and IoT

  • Home Assistant: official ARM64 support and active maintenance. Commonly deployed on 8 GB ARM64 systems without issue. The core service itself is relatively lightweight; memory usage typically increases through add-ons, integrations, databases, and historical retention.
  • Node-RED: flow-based automation platform with a small resource footprint. Often deployed alongside Home Assistant or used independently for event-driven automation workflows.
  • Zigbee2MQTT: requires a compatible USB Zigbee coordinator such as the Sonoff Zigbee 3.0 USB Dongle Plus. Commonly deployed in Docker containers with USB passthrough. In k3s environments, workloads usually need node affinity or node pinning so the service remains attached to the node holding the physical USB adapter.
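The node-pinning requirement above can be expressed with a `nodeSelector` keyed on a label you apply to the node holding the coordinator. The manifest below is a sketch; the node name, label, image tag, and serial device path (often `/dev/ttyUSB0` or `/dev/ttyACM0`) are assumptions to adapt.

```yaml
# Sketch: pin Zigbee2MQTT to the node with the Zigbee USB coordinator.
# Assumes the node was labeled first, e.g.:
#   kubectl label node rk1-node2 usb/zigbee=true
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zigbee2mqtt
spec:
  replicas: 1
  selector:
    matchLabels: { app: zigbee2mqtt }
  template:
    metadata:
      labels: { app: zigbee2mqtt }
    spec:
      nodeSelector:
        usb/zigbee: "true"        # keeps the pod on the node with the adapter
      containers:
        - name: zigbee2mqtt
          image: koenkk/zigbee2mqtt:latest
          securityContext:
            privileged: true      # simplest path to USB access; narrower options exist
          volumeMounts:
            - { name: usb, mountPath: /dev/ttyUSB0 }
      volumes:
        - name: usb
          hostPath: { path: /dev/ttyUSB0 }   # adjust to your coordinator's device path
```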

Home automation and IoT workloads are typically always-on, low-throughput, and latency-sensitive rather than computationally intensive. This profile maps well to the Turing Pi 2.5 platform: low idle power consumption, quiet operation, and local-first service availability make it well suited for self-hosted automation stacks without cloud dependency.


Part 4: Network Services

These are among the lightest workloads commonly deployed on the platform. Most can run on a shared 8 GB node with minimal resource contention, particularly in smaller homelab environments.

  • Pi-hole: ARM64 native. DNS-level ad blocking with a very small resource footprint.
  • AdGuard Home: official ARM64 images available. Provides a more feature-rich management interface along with built-in support for encrypted upstream DNS protocols such as DoH, DoT, and DNS-over-QUIC.
  • Unbound: recursive DNS resolver commonly paired with Pi-hole or AdGuard Home as an upstream resolver. When configured appropriately, it can reduce dependence on third-party DNS providers.
  • Nginx Proxy Manager: ARM64-compatible reverse proxy manager with a web interface for SSL certificate management and service routing.
  • WireGuard: kernel-level VPN protocol with native ARM64 support. Generally performs efficiently on RK3588 systems with relatively low CPU overhead compared to older VPN protocols.
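For the Unbound pairing mentioned above, a minimal recursive-resolver configuration looks like the sketch below. It follows the common Pi-hole + Unbound setup, where Pi-hole uses `127.0.0.1#5335` as its sole upstream; the tuning values are illustrative.

```
# Minimal recursive-resolver config for Unbound (e.g. /etc/unbound/unbound.conf.d/pi-hole.conf).
server:
    interface: 127.0.0.1
    port: 5335            # Pi-hole then points its upstream at 127.0.0.1#5335
    do-ip6: no            # disable if the LAN has no IPv6 upstream connectivity
    prefetch: yes         # refresh popular cache entries before they expire
    hide-identity: yes
    hide-version: yes
```

Running recursion locally means queries go to the authoritative DNS hierarchy directly rather than through a third-party resolver, at the cost of slightly slower first lookups before the cache warms up.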

For stability, DNS and reverse proxy services are often best placed on nodes that are not regularly rebooted or used for experimental workloads. Network infrastructure services tend to become dependencies for the rest of the homelab stack once other applications begin routing through them.


Part 5: Development Tools

| Service | ARM64 Support | Role |
|---------|---------------|------|
| Gitea | Official ARM64 images | Self-hosted Git service with a lightweight web UI and relatively small baseline resource requirements |
| Forgejo | Community-maintained ARM64 images | Community-driven fork of Gitea with a largely compatible workflow and interface |
| Woodpecker CI | ARM64-compatible container images | Lightweight CI/CD system commonly paired with Gitea or Forgejo |
| code-server | ARM64 support available | Browser-based VS Code environment; moderate idle RAM usage with heavier consumption during indexing, extensions, and active development sessions |

Running development infrastructure on ARM64 hardware is one of the most practical long-term uses of this platform. Source control, lightweight CI pipelines, container builds, and browser-based development environments all map well to always-on ARM clusters. For broader project planning examples, the use cases article explores development-oriented homelab workflows in more detail.

One operational consideration: CI workloads are highly bursty. Pipeline execution can temporarily increase CPU, memory, storage I/O, and network usage well above idle baseline levels, particularly during container image builds or dependency installation steps. A dedicated 16 GB node for CI runners provides useful headroom and reduces the likelihood of resource contention with other continuously running services.
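One way to contain that burstiness in k3s is to set explicit resource requests and limits on the CI agent. The fragment below is a sketch for a Woodpecker agent on a 16 GB node; the specific CPU and memory values are assumptions to tune against your pipelines.

```yaml
# Illustrative resource bounds for a Woodpecker agent pod on a 16 GB node.
# Requests reserve steady-state capacity; limits cap bursty pipeline runs.
apiVersion: v1
kind: Pod
metadata:
  name: woodpecker-agent
spec:
  containers:
    - name: agent
      image: woodpeckerci/woodpecker-agent:latest
      resources:
        requests:
          cpu: "500m"     # idle footprint is small
          memory: 512Mi
        limits:
          cpu: "4"        # allow bursts across several cores during builds
          memory: 8Gi     # cap so image builds cannot starve co-located services
```

Hitting the memory limit kills the pipeline step rather than the neighboring services, which is usually the right trade-off for a homelab cluster.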


Part 6: Productivity and Knowledge Management

  • Joplin Server: synchronization backend for Joplin note clients. ARM64-compatible with relatively low resource requirements in smaller deployments.
  • Outline: collaborative knowledge base and wiki platform with ARM64 images available. Typically better suited to a 16 GB node due to higher memory usage than lighter note-taking or documentation tools.
  • Stirling PDF: browser-based PDF processing toolkit with ARM64 support. Performs document conversion and manipulation locally without requiring external cloud services.
  • Paperless-ngx: document management platform with OCR, tagging, and full-text search capabilities. ARM64-compatible and actively maintained. OCR processing, thumbnail generation, and indexing workloads are usually the most resource-intensive parts of the pipeline.

For document-heavy workloads, storage performance matters more than idle CPU usage. Paperless-ngx repeatedly reads, writes, indexes, and reprocesses files during ingestion, particularly when handling scanned PDFs and OCR workflows. NVMe-backed storage can noticeably improve ingestion throughput and reduce delays when processing larger document batches.
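Ingestion load can also be tuned directly. The Compose fragment below uses environment variables from the Paperless-ngx configuration reference to bound OCR concurrency on an RK3588 node; the values are illustrative, not recommendations.

```yaml
# Compose environment fragment tuning Paperless-ngx ingestion (values illustrative).
services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    environment:
      PAPERLESS_TASK_WORKERS: "2"        # parallel consumer/OCR tasks
      PAPERLESS_THREADS_PER_WORKER: "2"  # 2x2 threads stays within the big-core budget
      PAPERLESS_OCR_MODE: "skip"         # skip OCR for pages that already contain text
```

`skip` in particular avoids re-running OCR on digitally generated PDFs, which is often the largest single saving when ingesting mixed document batches.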


Part 7: Password and Secrets Management

  • Vaultwarden: Bitwarden-compatible password management server with ARM64 support and relatively low resource requirements. Commonly deployed on shared 8 GB nodes alongside other lightweight services.
  • Infisical: secrets management platform designed for development and infrastructure workflows. ARM64 images are available for most deployment paths. Provides centralized secret storage and runtime injection for containerized applications and CI/CD environments.

These services map well to always-on ARM64 systems because their steady-state resource usage is usually modest while their practical value is high. In many homelab setups, Vaultwarden becomes one of the first externally accessible services deployed on the cluster.

For password and secrets infrastructure specifically, backup strategy and access control matter more operationally than raw compute performance. Small databases, configuration files, and encryption keys should be included in regular backups before exposing these services outside the local network.
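Because Vaultwarden stores its data in SQLite by default, a consistent backup should use SQLite's online backup API rather than a plain file copy of a possibly open database. The sketch below uses Python's standard library; the `/data/db.sqlite3` path mentioned in the usage note is Vaultwarden's default location, but verify it for your deployment.

```python
import sqlite3


def backup_vaultwarden_db(src_path: str, dest_path: str) -> None:
    """Copy a live SQLite database using the online backup API.

    Plain file copies of an open SQLite database can be inconsistent;
    Connection.backup() takes a consistent page-by-page snapshot instead.
    """
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        with dest:
            src.backup(dest)  # streams all pages atomically from src to dest
    finally:
        dest.close()
        src.close()
```

A typical invocation would be `backup_vaultwarden_db("/data/db.sqlite3", "/backups/db-backup.sqlite3")`, run from cron or a Kubernetes CronJob; remember to also back up the attachments directory and the RSA key files alongside the database.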


Part 8: Monitoring and Observability

| Service | ARM64 Support | Role |
|---------|---------------|------|
| Prometheus | Official ARM64 images | Metrics collection, time-series storage, and alert evaluation |
| Grafana | Official ARM64 images | Visualization and dashboard layer commonly paired with Prometheus and Loki |
| Uptime Kuma | ARM64-compatible images available | Service uptime and endpoint monitoring with notification support |
| Loki | ARM64 images available | Log aggregation platform; storage and memory usage depend heavily on retention and ingestion volume |

Uptime Kuma is used as a validation tool in both the complete setup guide and the k3s load balancing walkthrough to confirm that services remain reachable through the network stack. Prometheus and Grafana provide deeper visibility into system behavior under sustained load, including CPU usage, memory pressure, storage activity, and container health.

Monitoring data grows continuously over time. Prometheus retention windows, scrape intervals, and Loki log ingestion rates all influence long-term storage usage and memory consumption. Starting with conservative retention settings and expanding them later is usually easier than overprovisioning storage from the beginning.
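The fragment below sketches what a conservative starting configuration might look like. The specific values are assumptions, not recommendations; the flag names come from Prometheus's own documentation.

```yaml
# Illustrative conservative Prometheus settings for a small ARM64 cluster.
# prometheus.yml
global:
  scrape_interval: 60s       # 60s halves sample volume vs. a 30s interval
  evaluation_interval: 60s

# Launch flags (set in the container args or service unit):
#   --storage.tsdb.retention.time=15d    # keep roughly two weeks of metrics
#   --storage.tsdb.retention.size=10GB   # hard cap regardless of the time window
```

When both retention flags are set, Prometheus applies whichever limit is reached first, which makes the size cap a useful safety net on smaller NVMe drives.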


Part 9: AI and Inference Workloads

  • Ollama: ARM64-compatible inference runtime commonly used for local LLM deployment on RK3588 systems.
  • llama.cpp: lightweight inference framework with strong ARM64 support and granular control over quantization, threading, and context configuration.
  • Open WebUI: ARM64 images available; browser-based interface commonly paired with Ollama for multi-model management and chat workflows.
  • Whisper.cpp: ARM64-compatible local speech-to-text inference using quantized Whisper models without requiring external APIs.
  • LocalAI: OpenAI-compatible API layer with ARM64 deployment options available for multiple inference backends and model runtimes.

The LLM inference setup guide covers deployment workflows, quantization strategies, model selection, and practical throughput expectations for RK3588 systems in more detail.

For local inference workloads, available unified system memory is usually the primary constraint. Smaller quantized models run comfortably on 8 GB and 16 GB nodes, while 32 GB configurations provide more flexibility for larger models, longer context windows, and concurrent inference workloads.
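A rough back-of-envelope check helps when matching models to RAM tiers: weight memory is roughly parameter count times bits per weight divided by eight, plus runtime overhead. The helper below encodes that rule of thumb; the 20% overhead factor is an assumption and grows with context length in practice.

```python
def estimate_model_memory_gb(params_billion: float, bits_per_weight: float,
                             overhead: float = 1.2) -> float:
    """Rough memory footprint for a quantized LLM.

    weights = params * bits/8 bytes; `overhead` (an assumed 20%) stands in
    for the KV cache and runtime buffers, which scale with context length.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB
```

For example, a 7B model at 4-bit quantization works out to 3.5 GB of weights, around 4.2 GB with the assumed overhead, which fits comfortably on a 16 GB RK1 alongside the OS and other services.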


Part 10: Security and Access Control

  • Authelia: authentication and single sign-on middleware with ARM64 images available. Relatively lightweight compared to larger identity platforms and commonly deployed behind reverse proxies such as Nginx Proxy Manager.
  • Authentik: identity and access management platform with support for LDAP, SAML, OAuth2, and OpenID Connect. ARM64 deployment is supported. Typically better suited to a 16 GB node due to higher memory usage and additional background services.
  • CrowdSec: collaborative security platform with ARM64 support that analyzes logs and behavioral patterns to identify and block suspicious activity. Often deployed alongside reverse proxies and public-facing services.

Access control becomes increasingly important once services are exposed beyond the local network. Authentication layers such as Authelia or Authentik can centralize login workflows across multiple self-hosted services while reducing the need to manage authentication independently for each application.


Part 11: Dashboards and Portals

  • Homarr: self-hosted dashboard platform with ARM64 images available and integration support for many common homelab services. Provides a more dynamic UI with application widgets, status integration, and customizable layouts.
  • Homepage: lightweight service dashboard with ARM64 support and a minimal resource footprint. Configuration is primarily YAML-based and does not require a dedicated database for smaller deployments.

Dashboard services are lightweight but operationally useful once the number of self-hosted applications begins to grow. Centralized portals simplify navigation, status visibility, and service discovery across multi-node homelab environments.


Part 12: ARM64 Compatibility Notes

ARM64 support across the self-hosted software ecosystem has improved substantially in recent years. Many actively maintained projects now publish official multi-architecture container images that include linux/arm64 builds alongside traditional x86 targets.

Before deploying any service, verify ARM64 image availability on Docker Hub, GitHub Container Registry, or the project’s official release documentation. Image availability can vary by version, and some projects publish ARM64 support only for specific tags or deployment methods.
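In practice this means checking the platform list in the image's manifest, for example via `docker manifest inspect <image>`. The helper below parses the relevant structure from a trimmed-down sample of that output; the sample JSON is illustrative, not taken from any specific image.

```python
import json


def platforms_in_manifest_list(manifest_json: str) -> set:
    """Extract os/arch pairs from a Docker manifest list / OCI image index."""
    doc = json.loads(manifest_json)
    return {
        f"{m['platform']['os']}/{m['platform']['architecture']}"
        for m in doc.get("manifests", [])
        if "platform" in m
    }


# Trimmed-down illustration of what `docker manifest inspect <image>` returns:
sample = """
{
  "schemaVersion": 2,
  "manifests": [
    {"platform": {"architecture": "amd64", "os": "linux"}},
    {"platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
"""
```

Checking `"linux/arm64" in platforms_in_manifest_list(sample)` then answers the compatibility question for a given tag before anything is scheduled onto the cluster.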

For older or less actively maintained software, community ARM64 builds, alternative container images, or source compilation may still provide workable deployment paths depending on the complexity of the application and its dependencies.

Most of the applications covered throughout this guide also rely on standard infrastructure components such as PostgreSQL, MariaDB, Redis, or object storage backends, all of which now provide mature ARM64 deployment paths as well.

RK3588-Specific Considerations

  • Hardware-accelerated transcoding for Jellyfin and Plex on RK3588 systems often requires additional manual configuration, including compatible ffmpeg builds, device mapping, and hardware acceleration settings. Support and stability can vary depending on kernel version, container runtime, and media stack configuration.
  • NPU acceleration through the RK3588 NPU is workload-dependent and requires software with explicit runtime support. Many inference frameworks continue to rely primarily on CPU execution paths unless specifically configured otherwise.
  • ARM64 container image availability and version parity can differ from x86 releases for some projects. Before deployment, verify that the desired image tag includes linux/arm64 support and aligns with the project’s current release version.

Part 13: Workloads Better Suited to Different Hardware

  • Large Windows VM workloads: RK1 modules are designed primarily for ARM64 Linux environments. Emulation and virtualization options exist for some x86 workloads, but performance and compatibility can vary substantially depending on the application stack.
  • Sustained multi-stream transcoding: direct play is generally efficient on RK3588 systems, but multiple concurrent transcoding sessions can place significant load on CPU resources, storage throughput, and unified system memory.
  • LLM inference workloads that exceed available memory: larger models require careful quantization and memory planning. Available unified system memory places practical limits on model size, context length, and concurrent inference workloads.
  • x86-only applications without ARM64 support: applications without ARM64 container images, binaries, or viable source build paths may require emulation layers that introduce additional complexity and performance overhead.

None of these represent failures of the platform itself. They are examples of workloads designed around different performance assumptions, deployment targets, or hardware profiles. The RK1 benchmark article provides empirical context for understanding sustained CPU, memory, storage, and thermal behavior under real workloads.


Part 14: Node Allocation Guide

| Category | Recommended Node | Notes |
|----------|------------------|-------|
| Network services (Pi-hole, DNS) | 8 GB | Commonly shared with other lightweight infrastructure services |
| Home automation | 8 GB | Dedicated nodes are often beneficial for stability when using many integrations or add-ons |
| File storage | 16 GB + NVMe | Storage throughput and latency typically matter more than raw CPU performance |
| Media server | 16 GB | Hardware-accelerated transcoding on RK3588 may require additional configuration |
| Development tools | 16 GB | CI pipelines and container builds can create short-lived resource spikes |
| LLM inference | 32 GB | Available unified system memory influences practical model size, context length, and concurrency |
| Identity and SSO | 16 GB | Larger identity platforms such as Authentik generally require more memory than lighter middleware solutions |
| Monitoring stack | 8–16 GB | Resource usage depends heavily on retention windows, scrape intervals, and log volume |

For sustained workload behavior across different RAM tiers, the RK1 benchmark data provides empirical measurements for CPU performance, memory bandwidth, thermals, and long-duration workload stability under load.


Part 15: Building Your Stack Incrementally

Start with one or two services from different categories. A DNS filter and a password manager are a common starting combination: both provide immediate practical value, both have relatively small resource requirements, and both integrate cleanly into most homelab environments.

Validate resource usage under real workloads before expanding the stack further. Idle RAM usage alone is often a poor indicator of how services behave during synchronization jobs, indexing operations, backups, CI pipelines, or concurrent client activity.

  • For per-service isolation on a single node without the operational overhead of Kubernetes, Incus containers on Turing Pi 2.5 provide lightweight system containers and virtual machine management on ARM64 hardware.
  • When you need multi-node orchestration, service scheduling, load balancing, or distributed persistent storage, the k3s setup and load balancing guide is the natural progression path.
  • Add monitoring early. Understanding how the cluster behaves under sustained load is usually more operationally valuable than deploying additional services without visibility into resource usage and failure conditions.

Turing Pi 2.5 Self-Hosted Apps: Where Things Stand

The Turing Pi 2.5 self-hosted app landscape now covers a substantial portion of what most homelab and small-scale infrastructure environments require. File storage, observability, development tooling, AI inference, identity management, media streaming, and network infrastructure all have increasingly mature ARM64 deployment paths.

Turing Pi 2.5 is best understood as a flexible ARM64 infrastructure platform rather than a single-purpose device. The workloads covered throughout this guide represent a practical cross-section of what modern self-hosted ARM environments can support with current software ecosystems and container tooling.

Return to this article as your stack evolves. The services covered here represent some of the most commonly deployed and operationally mature workloads currently available for ARM64 homelab environments, but the broader self-hosted ecosystem extends well beyond this list. Many additional databases, automation platforms, developer tools, media applications, and infrastructure services now provide viable ARM64 deployment paths as well. Each section is intended to remain useful as a long-term reference for workload planning, compatibility validation, and node allocation decisions rather than only as an initial setup checklist.


FAQ

What self-hosted apps run well on Turing Pi 2.5 in 2026?

Many commonly used Turing Pi 2.5 self-hosted apps now provide ARM64-compatible deployment paths, including Nextcloud, Jellyfin, Home Assistant, Pi-hole, AdGuard Home, Gitea, Vaultwarden, Prometheus, Grafana, and Ollama. Compatibility and feature support can vary by version and deployment method, so verify linux/arm64 image availability before deployment.

Can I run Nextcloud and Jellyfin on the same Turing Pi 2.5 cluster?

Yes. A Turing Pi 2.5 cluster can run both Nextcloud and Jellyfin, either on separate nodes or on shared hardware depending on workload intensity. Media transcoding, storage throughput, and concurrent client activity are usually the primary factors that determine whether dedicated nodes are beneficial. NVMe-backed storage is recommended for both services.

How many services can I run on a 4-node Turing Pi 2.5 cluster?

A 4-node Turing Pi 2.5 homelab stack can support a substantial number of lightweight and moderate workloads simultaneously, but the practical limit depends heavily on RAM tier, storage performance, workload type, monitoring retention, and inference usage. Lightweight DNS, dashboard, and automation services consume far fewer resources than media transcoding, CI pipelines, or LLM inference workloads.

Is ARM64 self-hosted software support good in 2026?

ARM homelab self-hosted software support has improved substantially as more projects publish multi-architecture container images and official ARM64 builds. Most actively maintained self-hosted platforms now offer at least partial ARM64 deployment support.

What is the best self-hosted stack for a Turing Pi 2.5 homelab?

The best self-hosted stack for a Turing Pi 2.5 homelab depends on the workloads you plan to prioritize. A common starting point includes a DNS filtering service such as Pi-hole or AdGuard Home, a password manager such as Vaultwarden, observability tools like Prometheus and Grafana, and file synchronization through Nextcloud or Syncthing. From there, media servers, development tooling, and local AI inference workloads can be added incrementally as requirements grow.