Geekbench AI Corporate
1.1.0
Geekbench AI is a cross-platform AI benchmark that uses real-world machine learning tasks to evaluate AI workload performance.
Size: 539.4 MB
Version: 1.1.0
Description
Geekbench AI Corporate Overview
Geekbench AI is a cross-platform AI benchmark that uses real-world machine learning tasks to evaluate AI workload performance. Geekbench AI measures your CPU, GPU, and NPU to determine whether your device is ready for today’s and tomorrow’s cutting-edge machine learning applications.
Features of Geekbench AI Corporate
- Real-World AI Performance
Geekbench AI runs ten AI workloads, each with three different data types, giving you a multidimensional picture of on-device AI performance. Using large datasets that mimic real-world AI use cases, both developers and consumers can measure on-device AI performance in just a few minutes with Single Precision, Half Precision, and Quantized scores (see the illustrative sketch after this list).
- Measure AI on CPU, GPU, or NPU
Geekbench AI breaks down AI performance across the hardware stack: select the GPU, CPU, or your device’s dedicated NPU for testing. You can also choose from available AI frameworks on your device, like Core ML or QNN. Developers can determine the best combination of frameworks and models for particular workloads, and consumers can easily quantify the impact of dedicated AI hardware.
- Compare AI Performance Across Platforms
Geekbench AI runs identical workloads on Android, iOS, Windows, macOS, and Linux. The benchmark is built for hardware across the capability spectrum, whether you’re testing a smartphone with an ultra-low-power NPU or a dedicated workstation with a kilowatt-plus of dedicated AI compute. Instantly compare results using the Geekbench AI results browser.
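The three score types roughly correspond to running models with 32-bit floats, 16-bit floats, and 8-bit integers. The following NumPy snippet is a minimal sketch only, not Geekbench AI's code; it assumes a simple per-tensor affine INT8 quantization scheme purely to illustrate what the precision levels mean:

```python
# Illustrative only: what Single Precision, Half Precision, and Quantized
# roughly mean for model weights. Not Geekbench AI's implementation; the
# per-tensor affine INT8 scheme is an assumption chosen for clarity.
import numpy as np

weights_fp32 = np.random.randn(4, 4).astype(np.float32)   # Single Precision
weights_fp16 = weights_fp32.astype(np.float16)             # Half Precision

# Quantized: map float values onto 8-bit integers with a scale and zero point.
scale = (weights_fp32.max() - weights_fp32.min()) / 255.0
zero_point = np.round(-weights_fp32.min() / scale)
weights_int8 = np.clip(np.round(weights_fp32 / scale) + zero_point, 0, 255).astype(np.uint8)

# Dequantize to see how much precision was lost relative to the FP32 original.
dequantized = (weights_int8.astype(np.float32) - zero_point) * scale
print("max FP16 error:", np.abs(weights_fp32 - weights_fp16.astype(np.float32)).max())
print("max INT8 error:", np.abs(weights_fp32 - dequantized).max())
```

Lower-precision formats trade a small amount of numeric accuracy for smaller memory footprints and, particularly on NPUs, higher throughput, which is why the benchmark reports all three scores separately.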
System Requirements for Geekbench AI Corporate
RAM: 2 GB
Operating System: Windows 10 and 11
Space Required: 1 GB
What's new
- Framework and runtime upgrades - This includes an upgrade to ONNX Runtime 1.19 on Windows, which improves Half Precision support on AMD and Intel CPUs, and changes to the Core ML configuration that improve performance on iOS 18 and macOS 15. OpenVINO now falls back to the CPU in Single Precision workloads if the device does not support the required data types, rather than converting to a different data type at runtime. ArmNN has been upgraded to v24.08, and Samsung’s ENN to v3.1.8.
- Better results on some Android systems - A bug in the Play Store deployment limited the hardware Geekbench AI had access to on some Android devices. Extracting the bundled libraries to the file system mitigates the issue and improves performance.
- Score validation improvements - Geekbench AI now randomly re-validates extra iterations in a small percentage of cases, making the benchmark more robust. Validation is also parallelized in most workloads, reducing the time spent verifying results, so the benchmark should take less time to run.
- Requantize all the things - LOTS of models for LOTS of supported frameworks have been requantized, improving output quality, accuracy, and performance across workloads. Expect to see score increases.
- Smaller tweaks - Geekbench AI now uses per-image normalization ranges in Depth Estimation, improving accuracy calculations, and our intern adjusted some dates in the source code — give them a big hand, please.
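To make the per-image normalization note concrete, here is a minimal sketch, not the benchmark's actual implementation; the metric and function names are illustrative. It rescales each depth map to its own min/max range before comparing, so differences in absolute depth scale between images do not dominate the accuracy calculation:

```python
# Illustrative only: per-image normalization for a depth-estimation accuracy
# check, in the spirit of the changelog item above. Not Geekbench AI's code;
# the error metric and names are assumptions for the example.
import numpy as np

def normalize_per_image(depth: np.ndarray) -> np.ndarray:
    """Rescale one depth map to [0, 1] using its own min/max range."""
    lo, hi = depth.min(), depth.max()
    return (depth - lo) / (hi - lo) if hi > lo else np.zeros_like(depth)

def depth_accuracy(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Compare prediction and reference after normalizing each image
    independently, so per-image depth ranges do not skew the score."""
    error = np.abs(normalize_per_image(predicted) - normalize_per_image(reference))
    return float(1.0 - error.mean())

# Fabricated example: the prediction differs from the reference only by a
# constant scale factor, so per-image normalization reports a perfect match.
pred = np.array([[0.9, 1.8], [2.7, 3.6]])
ref = np.array([[1.0, 2.0], [3.0, 4.0]])
print(depth_accuracy(pred, ref))  # 1.0
```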