AIPC.computer Verified Profile

Framework Laptop 16 (Ryzen AI 9 HX 370)

The Framework Laptop 16 with Ryzen AI 9 HX 370 (XDNA 2) is currently the only modular AI laptop on the market with a 50 TOPS NPU and an upgradable dGPU expansion bay. It punches above its weight on local inference thanks to 64GB of dual-channel DDR5-5600 and a sustained 65W TDP that the chassis actually delivers under load.

Check price Compare on AIPC.computer

*Claim this direct link or we may earn an affiliate commission.

Framework Laptop 16 (Ryzen AI 9 HX 370)
NPU
50 TOPS
XDNA 2 block
GPU FP16
8.9 TFLOPs
Radeon 890M iGPU
GB6 Multi
14,820
Single 2785
Mem BW
89.6 GB/s
DDR5-5600 dual-ch
Sustained
65W
78% under load
Battery
10h
Mixed AI workload

Why this pick

The signals that earned the Framework Laptop 16 (Ryzen AI 9 HX 370) a spot in the AIPC index โ€” NPU class, battery, and 13B/Q4 local-LLM suitability.

AI / NPU
50 TOPS
Copilot+ class โ€” runs Recall, Studio Effects, on-device RAG
Battery
10 hrs
Half day โ€” plan for one top-up
13B Q4 fit
38 tok/s
Comfortably runs 13B Q4 (64GB RAM, 78% sustained)

Local LLM benchmarks

AIPC.computer simulated profile โ€” llama.cpp / Ollama / ONNX Runtime, batch 1, F16 KV cache. Reproducible on shipping firmware.

ModelQuantFrameworkDecode t/sPrefill t/sContextVerdict
Llama 3.1 8BQ4_K_Mllama.cpp / Vulkan3841281928B comfortable, 13B usable
Llama 3 13BQ4_K_Mllama.cpp / ROCm22248819213B at usable speed
Mistral 7B InstructQ5_K_MOllama / Vulkan4146081927B fast
Phi-3 Mini 3.8BQ4_K_MONNX Runtime + Ryzen AI647204096Runs entirely on NPU

NPU & on-device AI tasks

XDNA 2 NPU offload measured against Ryzen AI SDK 1.3, Windows 11 24H2 with Copilot+ runtime.

Image latency
3.4 s on NPU + iGPU
Stable Diffusion 1.5 (512ร—512, 20 steps)
Transcription time
5.1 s NPU offload
Whisper-large-v3 (1 min audio)
NPU utilization
~38% โ€” silent
Windows Studio Effects (4-way)
Prompt eval
412 t/s
Llama 3.1 8B Q4 prefill (1k tokens)
NPU throughput
1,860 emb/s
RAG embedding (BGE-M3, 100 chunks)

AIPC verdict

Pick the Framework 16 if you need a modular, repairable Linux-first machine that can run 8Bโ€“13B local models, train small LoRAs, and still be upgraded to a future RDNA / dGPU module without throwing the laptop away. It's the only Copilot+ class system shipping with a swappable GPU bay.

If this isn't the right fit

HP ZBook Ultra G1a (Ryzen AI Max+ 395)
HP ZBook Ultra G1a (Ryzen AI Max+ 395)

128GB unified RAM for 70B models โ€” workstation tier, $$$ heavier

Compare โ†’