Google I/O: The Banality Of Miracles
This week's computational festival laid out deep human truths, if only by accident.
Full disclosure: I worked for Google for over seven years, and regard them as an excellent employer with many first-class people.
This week Google took us to the state of the art in Artificial Intelligence, both as a technology and as a business. They showed us much: not simply what they have built but, even more, how a leading tech company conceives of the future of human experience, and how much they gloss over their own good work in order to sell us something bland.
If you came a few minutes late, if you left a few minutes early, if you blinked, you missed miracles. Three-dimensional remote interaction with other people, using standard cameras. Simultaneous language translation between people that’s getting faster and better. A global satellite network that can spot wildfires when they’re the size of a one-car garage, so human catastrophes may be prevented. Drones that have delivered drugs to victims of hurricanes. AI advancing science and medicine to further the human project.
Except for those few miraculous minutes, the most important public two hours of the year for Alphabet, Google’s parent company, saturated its audience with banalities. An animation of an old male owl talking with badgers. An animation of an old man in a flying car powered by a giant chicken. Another old man saying something nonsensical about the power of the ocean. Unimaginative ways to shop by yourself. A number of products that were called “Project (Something-or-other),” as if they were military missions or space adventures. Strange, and possibly unhealthy, interactions with technology and each other, thanks to AI “agents”, the new term for software that can execute commands over long periods with little supervision.

I get it, to an extent. This was Google I/O, the company’s big annual conference for external software developers. Google wants to get them jazzed about using its AI tools for animation, app creation, and Web development, since those are both subscription businesses and pathways to increase our dependence on the Web, which is good for Google’s ad business.
There’s not much of a living that most developers can make from the fire-spotting satellites and the protein-folding predictors. But if these people make lots of animations or find uses for agents, that will add to everyone’s bottom line, and pay for the growing computing power in Alphabet’s data centers.
That said, when you are in the future-creating business, which you surely are when you’re talking about products still in development, or when you’re throwing out lots of demos about how people will work with your new tech, you are also describing your expectations about how people will live.
In the world sketched out at I/O, we will apparently share our hallucinatory episodes with AI. In one startling promotional video for Gemini Live, a real-time AI product on phones, a person walking around talking to the AI agent mistook a garbage truck for a convertible, a streetlamp for a thin building, and their own shadow for a stalker. Gemini Live corrected her in its chirpy manner, without ads for antipsychotic medications or recommendations for nearby psychiatrists. This was either restraint or a missed commercial opportunity.
It was, however, an important clue about the good and bad uses of AI. When this technology is used to help people live in the physical world (as with the drone deliveries and the fire spotting) or when it helps us connect with more people (as with the translation or the 3D videocalls), it’s positive. But when AI interacts with us by acting as a virtual “friend” (the agent), or worse, when it imitates us to interact with others (as with a new program that searches our communications history to write convincing emails in our voice) more often than not it diminishes us.1
The good from AI will come from tools which help us communicate with each other, not from software programs so convincing that they spoof everyone’s humanity.
Google gave the most time to animation tools, including sound and text/speech generation, as well as images. This generative AI material is a heavy user of computation, and thus the best way for Google to fill those expensive new TPU computing pods2 it bragged about building out.
The deflating thing about these kinds of computational miracles is that they arrive somewhat tired. Gen AI models of necessity mine the past, scanning large repositories of data (aka, the past) to create statistically likely (aka, average) representations of what the owners want.
Thus, the images are low on the interesting or quirky details that really locate things in the world, or exhibit a creator’s unique taste. The stories are most effective when they deal in unreal things, like an owl talking to a badger or a chicken lifting a car under its wings.
They are weakest when they try to be human. It was telling that the owl clip lasted 23 seconds, and used different settings and angles, while the cliched old salt talking about the ocean was a single fixed-point shot of 8 seconds. Reality and its people are still hard to spoof.
While these tools automate animation, crushing costs and exponentially increasing output, it is unlikely they will create meaningful human moments. As advanced as it seems, Gen AI is a fundamentally backward-looking technology, since it ransacks the past, in the form of the data used to train its large language models, to create a statistically determined output of something that people already like. That is the past, infinitely riffed upon, and even describing it feels like defining the word "cliche."
There was also, of course, the usual broken language that confuses this computation and its underlying software with cognition itself - “it thought for 37 seconds” was how someone termed the runtime of a program - but that’s been a hallmark of marketing computers for decades.
The really remarkable thing is that Google briefly showed us how much AI can help humanity, while it spent hours on ways to dehumanize our singular human experience, through a triumph of cliches. But then, how else are you going to burn all that computation?
Banality tends to fade, and authentic meaning tends to win out, because deep down we yearn to understand others and be understood, more than we want to see a giant chicken carry a convertible heavenward.
It’s a shame that it’s so hard to find a large enough business model for that.
1. Google wasn’t the only one dehumanizing us through agents this week.
I pity the late life memories of anyone who walks this path. For our kids’ birthdays my wife and I took great joy in cooking up treasure hunts and homemade cakes, building out spider webs and making pinatas. The kids, now adults, still talk about that stuff, because we made it all and enjoyed it together.
I was taken aback by what this post said about our tech pundit class, among whom, it apparently goes without saying, it is now normal to spend $500 on a five-year-old's birthday party. The 40% of Americans living from one paycheck to the next may look askance.
2. Which, my god, can now do 42.5 quintillion (that is, 42.5 × 10^18) floating point operations a second. That is some impressive engineering. These computing pods will, of course, consume immense amounts of electricity as they create animations of wise old talking owls and wise old sea salts, but that was not mentioned at I/O.