Glue on your pizza

Fri Jun 7, 2024

So, a brutal sinus infection has thrown off my posting schedule. My sinuses are so inflamed that they're putting massive pressure on the nerves in my gums, so it feels like I have a toothache in about ten teeth. And it's lasting a long time, with nothing I can do to speed up the healing process. It sucks. But, as one doctor said, all I can do is take a big dose of Vitamin P (Patience).

Normally, I like to obsess about software for a couple of months before we head into the summer tech detox. And even though I missed the end-of-month deadline, May was a good month. In past years, I've written an entire post dedicated to Google I/O. But this year, we were treated to three major tech events in May: Apple's “Let Loose” iPad event, OpenAI's introduction of GPT-4o, and of course, the 2024 Google I/O keynote.

I'll be using my traditional thumbs-up-and-thumbs-down format for this post because all three of these events delivered some unexpected highs and lows. It was wild.

Let’s get into it.

Apple’s Let Loose event

The star of the show was a much thinner iPad Pro featuring the latest M4 chip and, for the first time, an OLED display. The iPad Air got an upgrade, moving from the M1 chip to the M2 chip, and gained a larger 13-inch model. Apple also announced a new “Apple Pencil Pro” with new gestures, haptic feedback, and a built-in gyroscope. And lastly, they threw in a redesigned Magic Keyboard for an enhanced iPad-as-laptop experience.

As you may know, I'm a big OLED fan. Apple's double-layered “tandem” OLED display outputs a staggering 1600 nits of peak HDR brightness. With the double layer mitigating burn-in issues while boosting colour and brightness, this is one of the most advanced OLED displays I've ever seen. Big thumbs up, and I'm hoping this display tech comes to MacBooks too. I'd be very interested in a thinner 16-inch MacBook Air with an OLED display.

Apple's iPad ad showing a giant hydraulic press violently crushing a bunch of musical instruments and art supplies was… soul crushing (sorry). The reaction may have been overblown (OMG, AI will destroy human creativity!), but Apple did admit they missed the mark. Apple rarely apologizes, so that is something.

OpenAI's introduction of GPT-4o

OpenAI introduced their new flagship language model, GPT-4o. This model boasts GPT-4-level intelligence but with significant improvements in handling text, audio, and visual data. It's also designed for faster performance. The “o” in GPT-4o stands for “omni,” reflecting its multimodal capabilities.

I was blown away by the accessibility use cases resulting from ChatGPT's ability to parse live video. Check out this one for the visually impaired. This technology would be perfect in smart glasses. I'd never considered that smart glasses could be useful for those who can't see, but this example changed my mind.

The new emotive voice model, while perhaps a bit too expressive, is genuinely impressive. It's too bad that they probably stole Scarlett Johansson's voice for it. Johansson has threatened legal action over the obvious similarity between her voice and the “Sky” voice model. Sam Altman openly referenced the movie “Her” and admitted to asking Johansson for permission to use her voice and being denied, so he doesn't have much of a defence.

Google I/O

As expected, there was a strong emphasis on Artificial Intelligence (AI), particularly the new capabilities of the Gemini language model and its integration with various Google products. Google unveiled more details about Android 15, including features like a “Private Space” for secure apps and data, and an AI-powered “Live Threat Detection” system. A major announcement was Project Astra, Google's next-generation AI assistant, which also features multimodal capabilities like GPT-4o's.

I'm a big Gemini fan already, and Project Astra would make an even more incredible OS for a future pair of smart glasses (I'm sensing a theme here?). There's this perception that Google is constantly playing catch-up with OpenAI. It's true that OpenAI's strategy is to be first to market (I mean, they didn't even pretend they weren't trying to upstage Google I/O by hosting their event the day before). But Google has been doing the AI thing for years, and they're really good at it. I'd even say they're the best at it.

Yes, Google has been doing the AI thing for years. And they’ve been telling us about it for years at I/O. But every year, I/O is more boring than the last. The energy level at Google these days just sucks the life out of the room. What’s going on over there, guys? Bring back the skydivers, please.

But the most damaging thing that Google has done is throwing all that careful, measured progress down the toilet by jumping the gun on AI-generated search summaries. You've probably read all the insane examples by now of Google confidently giving people shitty advice. The most famous one, of course, being “How do I stop cheese from sliding off my pizza?” To which Google suggested mixing an eighth of a cup of non-toxic glue into your pizza sauce. Google sourced this information from a sarcastic post on Reddit.

So this is what the world has come to: Google has reportedly agreed to pay Reddit $60 million per year to use their data for an AI that can never have a sense of humour, let alone know the difference between a fact and a joke, or between real journalism and satire. And they're giving that AI the keys to the Internet. Anyone else missing Encyclopaedia Britannica right now?

