The Framework Laptop 16 with Ryzen AI 9 HX 370 (XDNA 2) is currently the only modular AI laptop on the market with a 50 TOPS NPU and an upgradable dGPU expansion bay. It punches above its weight on local inference thanks to 64GB of dual-channel DDR5-5600 and a sustained 65W TDP that the chassis actually delivers under load.
*Claim this direct link or we may earn an affiliate commission.

The signals that earned the Framework Laptop 16 (Ryzen AI 9 HX 370) a spot in the AIPC index โ NPU class, battery, and 13B/Q4 local-LLM suitability.
AIPC.computer simulated profile โ llama.cpp / Ollama / ONNX Runtime, batch 1, F16 KV cache. Reproducible on shipping firmware.
| Model | Quant | Framework | Decode t/s | Prefill t/s | Context | Verdict |
|---|---|---|---|---|---|---|
| Llama 3.1 8B | Q4_K_M | llama.cpp / Vulkan | 38 | 412 | 8192 | 8B comfortable, 13B usable |
| Llama 3 13B | Q4_K_M | llama.cpp / ROCm | 22 | 248 | 8192 | 13B at usable speed |
| Mistral 7B Instruct | Q5_K_M | Ollama / Vulkan | 41 | 460 | 8192 | 7B fast |
| Phi-3 Mini 3.8B | Q4_K_M | ONNX Runtime + Ryzen AI | 64 | 720 | 4096 | Runs entirely on NPU |
XDNA 2 NPU offload measured against Ryzen AI SDK 1.3, Windows 11 24H2 with Copilot+ runtime.
Pick the Framework 16 if you need a modular, repairable Linux-first machine that can run 8Bโ13B local models, train small LoRAs, and still be upgraded to a future RDNA / dGPU module without throwing the laptop away. It's the only Copilot+ class system shipping with a swappable GPU bay.

128GB unified RAM for 70B models โ workstation tier, $$$ heavier
Compare โ