It is Glasses.. how we will consume AI!

Meta has AI glasses, and yesterday Sergey Brin, $GOOGL co-founder, acknowledged the mistakes with Google Glass, while $AAPL, which shut down Vision Pro in January, today turned around and announced they are launching smart glasses.

Many wondered whether smartphones would be replaced by something else as the way consumers use AI… looks like glasses are the way to go.

I will keep an eye on the development :slight_smile:

1 Like

Nope. About 40% of 20-50 year olds wear glasses, so you have to convince 60% of the population to put on a pair of spectacles for which they will have only occasional use. Seems unlikely to me.

Jony Ive and Sam Altman think it will be a “third device” for the desktop.

Altman and Ive then reportedly told staff that the plan was for their device to be a user's third one, something they would put on their desks after an [iPhone] and a [MacBook Pro]. It would be able to go on a desk or in a pocket, and it would be unobtrusive.

I’m not sure I buy the idea of a third device either, but it seems more likely to me than convincing 60% of the world to put on glasses if they don’t need them.

Your premise assumes that only those who currently wear corrective lenses will be willing to wear glasses. At some point in the past, the idea of people carrying a phone everywhere sounded far-fetched. Today smartphones are omnipresent. So carrying a smartphone with you at all times may become antiquated in the future.

1 Like

They will most certainly have to find a way to make them more comfortable while at the same time allowing for the massive amount of personalization that those who wear glasses currently enjoy.

That seems to be a pretty tall ask - especially considering the fact that something like $16 billion is spent every year (in the USA) on eyeglass avoidance (e.g. contacts and Lasik surgery).

And your comparison to cell phones does not take into account the fact that no one is paying $16 billion a year to avoid having to use a cell phone.

A lot of people who “don’t wear glasses” … Do wear sunglasses … Cause they look cool.

Just saying.
:disguised_face:
ralph

1 Like

There may be an engineering reason (currently at least) that they are doing that. Because the quantity of data that each device will have to send up to the main AI data centers is gargantuan. It includes “constant” video and audio so the device/system can learn everything about the surroundings at all times. And that will take quite a lot of bandwidth. More bandwidth than is available on a typical cellular connection if these things ever become popular. So they will need to connect to a WiFi connection that has higher bandwidth available. Furthermore, if they were to constantly use cellular bandwidth, the power consumption will be quite high and they will require a larger battery for the mobile scenario. If they connect via a cellphone to use its data connection, then BOTH batteries will drain relatively rapidly under constant use.
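To put rough numbers on that bandwidth claim, here is a back-of-envelope sketch in Python. The bitrates and wear time are illustrative assumptions on my part, not figures from any actual product:

```python
# Back-of-envelope: daily data volume of always-on video + audio upload.
# All rates below are illustrative assumptions, not product specs.
video_mbps = 2.0   # a modest compressed video stream
audio_mbps = 0.1   # compressed audio
total_mbps = video_mbps + audio_mbps

hours_per_day = 12                 # assume the glasses are worn half the day
seconds = hours_per_day * 3600

# megabits -> megabytes -> gigabytes
gigabytes_per_day = total_mbps * seconds / 8 / 1000
print(f"{gigabytes_per_day:.1f} GB/day")  # → 11.3 GB/day
```

Even at these conservative rates, that is on the order of 300+ GB a month per device - well past typical cellular data caps, which is consistent with the point that WiFi offload (and the battery cost of constant radio use) becomes the binding constraint.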

All the choices have downsides. Glasses have downsides because not everyone wants to wear glasses, and power requires battery which adds weight. I’ve worn glasses for 50+ years and lighter is better. Always. Phone has downsides, need to “take it out”, again power constraints, size and weight, etc. But everyone already has a phone, so maybe no big deal. Puck has downsides, it’ll need to be big enough for a decent sized battery, if it does AI processing on board, and it will, it’ll need to dissipate heat (like a phone does under heavy use), and field of view, etc. Of course, “everyone” is used to the puck from existing Amazon Alexa and Google/Apple Home devices.

I don’t know if either of the form factors will “work” because I’m not sure the AIs are up to conversational answers yet. If the only interface will be voice then I don’t know if it’ll work well [yet]. I think the screen on the phone is still necessary for a while yet. For example “AI, I want to book a flight to Aruba in a week from Thursday”, reply “There are 19 flights available that day, do you want direct or connection?”, “direct”, “Do you want morning or evening?”, “morning”, “There are 4 flights, one at 6am, one at 7am, one at 9am, and one at 10am, do you want me to book one for you?”, “what are the prices?”, “The 6am one is the lowest at $450, the 10am one is the highest at $940”, “What is the 9am price?”, “It is $650”, “Okay, book it for me and my wife and both kids”, “I can only book one at the $650 price, the other 3 passengers will have a higher price”, “Can you check afternoon prices?”, etc. Now, you can go through this whole conversation, and it could take 10 minutes, or you can simply display the price matrix on a screen and get all the info you need in 15 seconds. Which would you choose?

2 Likes

Spoken word is frightfully slow, about 130 words per minute. Most people can read at double, some triple, and some faster than that. Better yet, you can skim a whole page at a glance and let your eye be drawn to the most likely options, something that you simply can’t do aloud.
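A quick sanity check on those rates, using the 19 flight options from the booking example above. The words-per-option figure is a hypothetical assumption; the speech and reading rates are the ones stated in this thread:

```python
# Time to convey N flight options by voice vs. linear reading.
# words_per_option is an assumed figure, e.g. "Flight 3: 9am, direct, $650".
options = 19
words_per_option = 12
speech_wpm = 130    # rough spoken-word rate from the post
reading_wpm = 260   # "double" the spoken rate

total_words = options * words_per_option
speech_seconds = total_words / speech_wpm * 60
reading_seconds = total_words / reading_wpm * 60
print(f"voice: {speech_seconds:.0f}s, reading: {reading_seconds:.0f}s")
# → voice: 105s, reading: 53s
```

And that is for linear reading; skimming a displayed price matrix and letting your eye jump to the cheapest direct morning flight is faster still, which is the real advantage of a screen.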

So it is likely to be a combination: “Hey, book me a flight from here to there next Thursday”. And maybe “and look for some options for the weekend, too.” From there the AI picks the most likely options and displays them on a screen, from which you pick and say “book x” and it does the rest.

So a combination of audio, video, and AI doing the messy background stuff of booking, printing (somewhere) or sending confirmation to your email, and paying.

Glasses ain’t gonna do it. But they might be a relay station between your phone (or wifi) and the puck/screen at home. So glasses for mobile (but not for everyone), a puck and a screen at home, or as we used to call them: desktops. :wink: (But that’s not trendy or hyperhypey so we have to call it something else.)