The State of AI Report is a comprehensive analysis of the most interesting developments and implications of artificial intelligence (AI) across research, industry, politics and safety. The report is produced by AI investors Nathan Benaich and Ian Hogarth, and reviewed by leading AI practitioners in industry and research. The report aims to trigger an informed conversation about the state of AI and its future direction.
What the Report Covers
The report covers the following key dimensions:
- Research: Technology breakthroughs and their capabilities.
- Industry: Areas of commercial application for AI and its business impact.
- Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
- Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
- Predictions: What the authors believe will happen and a performance review to keep them honest.
Key Themes
Some of the key themes in the 2022 report are:
- New independent research labs are rapidly open-sourcing the closed-source output of major labs.
- Despite the dogma that AI research would become increasingly centralised among a few large players, the lowered cost of and access to compute has led to state-of-the-art research coming out of much smaller, previously unknown labs. Meanwhile, AI hardware remains strongly consolidated around NVIDIA.
- Safety is gaining awareness among major AI research entities, with an estimated 300 safety researchers working at large AI labs, compared to under 100 in last year's report. The increased recognition of major AI safety academics is a promising sign for AI safety becoming a mainstream discipline.
- The China-US AI research gap has continued to widen, with Chinese institutions producing 4.5 times as many papers as American institutions since 2010, and significantly more than the US, India, UK, and Germany combined. Moreover, China is significantly leading in areas with implications for security and geopolitics, such as surveillance, autonomy, scene understanding, and object detection.
- AI-driven scientific research continues to lead to breakthroughs, but researchers warn that methodological errors in AI can leak into these disciplines, with major problems like data leakage driving a growing reproducibility crisis in AI-based science.
The report also draws on other sources, such as McKinsey, which found that AI adoption has more than doubled since 2017, though the proportion of organizations using AI has plateaued between 50 and 60 percent for the past few years. A set of companies seeing the highest financial returns from AI continues to pull ahead of competitors by making larger investments in AI, engaging in increasingly advanced practices known to enable scale and faster AI development, and showing signs of faring better in the tight market for AI talent.
In Summary
The report concludes with a set of predictions for the coming year, such as:
- A major lab will release an open source model that can generate high-quality images from text descriptions.
- A new generation of low-code/no-code tools will enable non-experts to build and deploy simple AI applications.
- A major social media platform will launch a feature that uses AI to create personalized avatars for users.
- A large-scale cyberattack will exploit an AI vulnerability or use an adversarial example to cause damage or disruption.