Genius Makers
I finished reading Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz, and I enjoyed it immensely. It covers the history of machine learning from Frank Rosenblatt’s perceptrons to the present day, and though it doesn’t go into much technical depth, it provides a fascinating look at the personalities, companies, and technologies that make up the field.
The book focuses on a handful of AI researchers, presenting their accomplishments and their dealings with corporations. It discusses the formation of DeepMind, its acquisition by Google, and the extraordinary performance of AlphaGo. It presents Yann LeCun’s work on convolutional neural networks (CNNs) and explains how he ended up at Facebook (now Meta). I was particularly fascinated by the bizarre formation of OpenAI, which was founded to serve humanity but is now concerned with serving its shareholders.
Genius Makers covers many topics related to AI, but there’s one glaring omission. Published in early 2021, it just misses the revolution catalyzed by large language models (LLMs). It mentions the extraordinary performance of BERT (bidirectional encoder representations from transformers), but it says nothing about how transformers work or how they’ve shaken up the field of machine learning.