
State of AI Report: Compute Index V2 update!

Compute is at the heart of all progress in AI, and being able to analyze the reported compute use across AI research papers gives a unique perspective on where things are moving. Today the team behind the State of AI Report published a new resource to which we have added a specific Zeta Alpha perspective. This blog post is a slightly modified version of the Substack post by the State of AI Report team.

Image by State of AI Report.

The new State of AI Report Compute Index shows, first of all, the ranking of AI-centric supercomputers (image above). You can now also find live counts of AI research papers using chips from NVIDIA, TPUs, ASICs, FPGAs, and AI semiconductor startups. To access the specific papers that mention the hardware, go directly to the Zeta Alpha platform.

NVIDIA is 2 orders of magnitude ahead of others

First, the king of the hill: NVIDIA chips compared to Google’s TPU, ASICs, FPGAs, and chips from AI semiconductor challengers Graphcore, SambaNova Systems, Cerebras, Habana/Intel, and Cambricon. We also included Huawei’s Ascend 910, which has become more attractive for Chinese AI labs now that the US has restricted access to the top NVIDIA gear, but the number of mentions in papers is still very low.


This graph clearly shows NVIDIA’s dominance, with over 21k papers using their technology. By contrast, all FPGAs together sum to 740 papers, Google’s TPU comes in at 257, and the five AI startups combined account for 172 papers. This is an enormous gap (note the logarithmic y axis).

NVIDIA’s most popular chip for AI research: the V100

In the following graph, you see NVIDIA-specific data. Their most successful chip for AI research in 2022 is still the V100, released in Dec 2017. Rising fast are the RTX 3090 and the A100, a workhorse for AI workloads in private, public, and national HPC clusters.
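As a rough illustration, here is a minimal matplotlib sketch that plots the counts quoted above on a logarithmic axis. It is a reproduction under our own assumptions, not the interactive chart on stateof.ai:

```python
import matplotlib.pyplot as plt

# Approximate paper counts quoted in this post (Zeta Alpha index, late 2022)
counts = {
    "NVIDIA": 21_000,
    "All FPGAs": 740,
    "Google TPU": 257,
    "5 AI semi startups": 172,
}

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(list(counts.keys()), list(counts.values()))
ax.set_yscale("log")  # the gap is only readable on a log axis
ax.set_ylabel("AI research papers mentioning the hardware")
ax.set_title("Hardware mentions in AI research papers (approximate counts)")
plt.tight_layout()
plt.show()
```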



Both the RTX 3090 and the A100 are still at roughly 50% of the V100’s volume: the installed base is still being used. Since we first published this data in mid-October 2022, NVIDIA’s latest chip, the hotly awaited H100, has started to appear.

Graphcore leads its peer AI semiconductor startups

While overall counts among AI semiconductor startups in AI research are very low, Graphcore sees the most usage in papers, followed by Habana/Intel, Cambricon, Cerebras, and SambaNova.


Of course, mentions in publications do not equal commercial success, but we take the view that usage of chips in AI research papers (early adopters) is a leading indicator of industry usage. And let us just remind you what happened to TensorFlow...


It seems a good idea to base your decisions on data and leading indicators.

A few notes:
- Papers using AI semi startup chips mostly have authors from the startup itself.
- The numbers for 2022 are extrapolated from a result count in the Zeta Alpha index as of 4 Dec '22 (see the sketch below). We have filtered by source to exclude GitHub documentation and Hugging Face resources.
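To make that extrapolation concrete, here is a minimal sketch of a linear day-of-year annualization. The exact method we used may differ, and the example count of 950 is hypothetical:

```python
from datetime import date

def annualize(count_so_far: int, as_of: date) -> int:
    """Linearly extrapolate a year-to-date paper count to a full-year estimate."""
    day_of_year = as_of.timetuple().tm_yday
    days_in_year = date(as_of.year, 12, 31).timetuple().tm_yday
    return round(count_so_far * days_in_year / day_of_year)

# Hypothetical example: a count taken from the index on 4 Dec 2022
print(annualize(950, date(2022, 12, 4)))  # -> 1026
```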

 

Follow Zeta Alpha on Twitter @zetavector to stay up to date with everything that's happening there! See the live charts here: www.stateof.ai/compute

