Hexel Compute provides two ways to run workloads:

Agent Hosting. Deploy a Docker image and receive a permanent endpoint. Your agent stays in memory with millisecond wake times. The platform handles authentication, health checks, and automatic recovery.

Sandboxes. Allocate an isolated execution environment in milliseconds. Run Python code, shell commands, and file operations. Release when done.

Documentation Index
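The sandbox lifecycle described above (allocate, run, release) can be sketched as follows. This is a minimal illustration only: `SandboxClient` and its methods are hypothetical stand-ins, not a documented Hexel Compute SDK.

```python
# Hypothetical sketch of the sandbox lifecycle: allocate -> run -> release.
# SandboxClient is an illustrative stub, NOT the real Hexel Compute client.

class SandboxClient:
    def __init__(self, tier: str = "Standard"):
        self.tier = tier
        self.active = False

    def allocate(self) -> "SandboxClient":
        # In a real client this would call the platform API; allocation
        # is advertised to complete in milliseconds.
        self.active = True
        return self

    def run_python(self, code: str) -> str:
        # Placeholder for executing Python code inside the sandbox.
        assert self.active, "sandbox not allocated"
        return f"ran {len(code)} bytes of code on {self.tier} tier"

    def release(self) -> None:
        # Release the environment when done so resources are freed.
        self.active = False


sandbox = SandboxClient(tier="Nano").allocate()
result = sandbox.run_python("print('hello')")
sandbox.release()
print(result)
```

The key point the sketch captures is that a sandbox is ephemeral: it must be allocated before use and explicitly released when the work is finished.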
Fetch the complete documentation index at: https://hexelstudio.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Tiers
Every sandbox and agent deployment runs on a specific tier.

| Tier | CPU | Memory | Best for |
|---|---|---|---|
| Nano | 2 cores | 8 GB | Lightweight agents, chatbots |
| Standard | 4 cores | 16 GB | Most AI agents |
| Performance | 8 cores | 32 GB | Data processing, RAG pipelines |
| Enterprise | 16 cores | 64 GB | Large models, heavy compute |
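Using the figures from the tier table above, tier selection can be sketched as picking the smallest tier that satisfies a workload's CPU and memory needs. The helper below is illustrative; `pick_tier` and its selection rule are assumptions, not part of the platform API.

```python
# Smallest-fit tier selection over the table above.
# (name, CPU cores, memory in GB), ordered smallest to largest.
TIERS = [
    ("Nano", 2, 8),
    ("Standard", 4, 16),
    ("Performance", 8, 32),
    ("Enterprise", 16, 64),
]

def pick_tier(cpu_cores: int, memory_gb: int) -> str:
    """Return the first (smallest) tier meeting both requirements."""
    for name, cores, mem in TIERS:
        if cores >= cpu_cores and mem >= memory_gb:
            return name
    raise ValueError("no tier satisfies the request")

print(pick_tier(4, 12))   # a 4-core, 12 GB workload fits Standard
print(pick_tier(10, 40))  # a 10-core, 40 GB workload needs Enterprise
```

Because the tiers are ordered, the first match is also the cheapest adequate one, mirroring the "Best for" guidance in the table.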

