# Native worker overview
Native workers run Qwen2.5 14B (abliterated/uncensored) on your machine using node-llama-cpp. They serve Max tier requests and earn 3-5x more than browser workers.
## Why go native?
- Higher earnings — Max tier jobs pay significantly more than Free or Pro
- Better model — 14B parameter model produces higher quality responses
- Real GPU utilization — uses CUDA, Metal, or Vulkan for full hardware acceleration
- Runs headless — no browser tab needed, runs as a background process or service
## Requirements
- Node.js 18+ (22+ recommended)
- GPU with 10GB+ VRAM:
  - NVIDIA: GTX 1080 Ti, RTX 3060 12GB, RTX 3080, RTX 4070 or newer
  - Apple Silicon: M1 (16GB+ unified memory), M2, M3, M4
  - AMD: RX 6800+, RX 7800+ (via Vulkan)
- ~10GB disk space for the model (downloaded on first run)
- Stable internet connection
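If you want to sanity-check a machine before installing, the requirements above can be scripted. This is a sketch, not part of the worker itself: the disk check measures the current directory, and the VRAM query assumes an NVIDIA GPU with `nvidia-smi` on the PATH (Apple Silicon uses unified memory instead, so that step is skipped there).

```bash
#!/usr/bin/env bash
# Preflight check before installing the native worker.
set -euo pipefail

# Node.js 18+ required (22+ recommended).
if command -v node >/dev/null 2>&1; then
  node_major=$(node -p 'process.versions.node.split(".")[0]')
  if [ "$node_major" -lt 18 ]; then
    echo "Node.js 18+ required (found v$node_major)" >&2
  fi
else
  echo "Node.js not found; install 18+ (22+ recommended)" >&2
fi

# ~10GB free disk space for the model download.
avail_gb=$(df -Pk . | awk 'NR==2 {print int($4/1024/1024)}')
if [ "$avail_gb" -lt 10 ]; then
  echo "warning: only ${avail_gb}GB free; the model needs ~10GB" >&2
fi

# VRAM check (NVIDIA only; skipped on other hardware).
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.total --format=csv,noheader
fi
```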
## Quick start
```bash
npx @c0mpute/worker --token <your-token>
```
That's it. One command. node-llama-cpp handles model download, GPU detection, and compilation automatically.
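Because the worker runs headless, you can keep it alive after closing the terminal with plain `nohup`. A minimal sketch: the `WORKER_TOKEN` variable name and the `worker.log` path are our conventions, not something the CLI requires.

```bash
# Run the worker in the background, logging to a file.
# WORKER_TOKEN is assumed to hold the token from the quick-start command.
if [ -z "${WORKER_TOKEN:-}" ]; then
  echo "set WORKER_TOKEN first, e.g.: export WORKER_TOKEN=<your-token>"
else
  nohup npx @c0mpute/worker --token "$WORKER_TOKEN" > worker.log 2>&1 &
  echo "worker PID: $!"
fi
```

For anything longer-lived than a session, a proper service manager (systemd, launchd) is the better fit.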
## Get your token
1. Go to c0mpute.ai/worker
2. Log in with Privy
3. Scroll to the Native Worker section
4. Click Get Worker Token
5. Copy and save the token — it's shown only once
See Worker tokens for more details.
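Since the token is shown only once, it's worth stashing it somewhere readable only by your user. A sketch using a path of our choosing (the worker doesn't read this file itself; you pass the token on the command line):

```bash
# Save the token once, locked down to your user.
mkdir -p ~/.config/c0mpute
printf '%s\n' '<your-token>' > ~/.config/c0mpute/token
chmod 600 ~/.config/c0mpute/token

# Later, start the worker without retyping it:
# npx @c0mpute/worker --token "$(cat ~/.config/c0mpute/token)"
```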
## Platform guides
- Linux setup — NVIDIA CUDA
- Windows setup — WSL recommended
- macOS setup — Apple Silicon / Metal
- Troubleshooting — common issues
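On Linux, "runs as a background process or service" can mean a systemd user unit. A minimal sketch, assuming the unit is saved as `~/.config/systemd/user/c0mpute-worker.service`; the unit name and `npx` invocation are our choices, not something the project prescribes:

```ini
[Unit]
Description=c0mpute native worker
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/env npx @c0mpute/worker --token <your-token>
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target
```

Then `systemctl --user enable --now c0mpute-worker` starts it at login and restarts it on failure.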