

Edge AI
Inference when low latency and cost are critical
Self-hosted
Development
We combined practices from building server products: a built-in BMC with a CLI for remote bootstrapping, from bare metal all the way to Kubernetes and machine-learning stacks, plus support for compute modules from multiple vendors, including Nvidia.
$ tpictl login
$ tpictl info
$ tpictl get nodes
$ tpictl flash node 1 -f ./ubuntu.img
$ tpictl reboot node 1
$ tpictl serial node 1
$ tpictl drain node 2
$ tpictl poweroff node 2
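The interactive session above can also be scripted. Below is a minimal sketch of bootstrapping every node over the BMC CLI, using only the `tpictl` subcommands shown above; the node list and image path are placeholders, and `DRY_RUN=1` (the default here) prints each command instead of executing it.

```shell
#!/usr/bin/env bash
# Hypothetical bootstrap sketch: flash an OS image to each node, then
# reboot it. Node list and image path are illustrative placeholders.
set -euo pipefail

IMAGE="./ubuntu.img"
NODES="1 2 3 4"

run() {
  # In dry-run mode (DRY_RUN=1, the default) print the command;
  # otherwise execute it against the board.
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

bootstrap() {
  run tpictl login                          # authenticate against the BMC
  for n in $NODES; do
    run tpictl flash node "$n" -f "$IMAGE"  # write the OS image
    run tpictl reboot node "$n"             # boot into the fresh image
  done
}

bootstrap
```

Running it with `DRY_RUN=0` would execute the same sequence for real; swapping the loop body lets the same pattern drive `drain` and `poweroff` for maintenance.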