One more thing...

I recently posted a case for Apple entering another growth phase. I believe it can hold true if we get another positive catalyst for the stock: a product the world has yet to see (one more thing…).

It’s not hard to find opinions on what that next product might be. Unfortunately, in recent times many of those original product ideas never saw the light of day. Here are a few examples we’ve heard about over the years.

Apple TV (a physical TV set, not just the box)

Apple TV streaming service (premium channel bundles you can subscribe to, and maybe even getting into the content business itself)

Apple Car (a physical car, not just the software running inside the car)

Regarding that last example, we heard the team assembled for this effort (Project Titan) was recently stripped down and its members repurposed. It sounds like Apple will continue to provide just the software running inside the car (CarPlay), not the car itself.

However, there was a product team assembled earlier this year, and that group has yet to be disbanded.

http://seekingalpha.com/news/3068066-report-apple-creates-la…

The FT reports Apple (NASDAQ:AAPL) “has assembled a large team of experts in virtual and augmented reality and built prototypes of headsets that could one day rival Facebook’s Oculus Rift or Microsoft’s HoloLens.”

The team is said to include “hundreds of staff from a series of carefully targeted acquisitions, as well as employees poached from companies that are working on next-generation headset technologies.”

The FT also notes Apple has bought Flyby Media, a developer of software that lets mobile devices “scan” real-world objects and add them to a virtual collection.

The update above was from January 2016. It is likely that a smaller team started with a concept much earlier and has since ramped up to make it consumer-ready. Is a new Apple AR product forthcoming?

Fast forward to today, and we have Tim Cook sharing his thoughts on AR/VR.

http://seekingalpha.com/news/3208765-apple-ceo-tim-cook-perc…

Apple CEO Tim Cook perceives augmented reality as a larger opportunity than virtual reality.

On AR: “My own view is that augmented reality is the larger of the two, probably by far, because this gives the capability for both of us to sit and be very present talking to each other, but also have other things visually for both of us to see. Maybe it’s something we’re talking about, maybe it’s someone else here that is not here, present, but could be made to appear to be present with us. So there’s a lot of really cool things there.”

On VR: “Virtual reality sort of encloses and immerses the person into an experience that can be really cool, but probably has a lower commercial interest over time, or less people would be interested in that.” He does, though, go on to note that Apple (AAPL +4.5%) has a lot of consideration for virtual reality as it relates to gaming and education.

While AR and VR initiatives are already most notably underway at Alphabet, Facebook, Samsung and Microsoft, efforts by Intel, Snapchat and Line have emerged within the last few weeks, demonstrating increased movement throughout the space. With Cook’s comments today, it appears Apple’s entry into the arena is not far off.

Here is my prediction. With each iPhone generation we’ve seen significant improvements to the onboard camera and display. I think the next device we’ll see will be a headset designed to fit an iPhone Plus, and this pairing will give its wearer an immersive AR experience.

The introduction of such a product would act as a two-way catalyst. The first is sales of the unit itself. The second is the spur of additional sales of the device needed to make it all work: the iPhone 7 Plus.

Again, speculation, but worthy of consideration.

Best,
–Kevin

18 Likes

I just hope the headset isn’t one of those Beats atrocities.

2 Likes

I should mention why I think an Apple AR product will be specifically geared toward the iPhone 7 Plus.

First, the size and resolution of the screen. I don’t think the regular iPhone is large enough to fit over a person’s eyes.

Second, the addition of a second camera. You need a second camera to provide depth perception (just like your eyes), and depth perception is what will make this type of AR really work. Sure, you can approximate depth through software by associating observed objects with known models, but that type of association will never be perfect. Two cameras, however, would give the AR software the edge it needs to make Apple’s AR experience stand out from the rest. Two eyes are better than one.
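To put some rough numbers on it (a textbook two-camera model with made-up figures, not actual iPhone specs), the depth of a feature falls straight out of the lens spacing and how far the feature shifts between the two images:

```python
# Textbook two-camera depth sketch. All numbers are illustrative,
# not actual iPhone 7 Plus specs.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a point seen by both cameras.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- spacing between the two camera centers, in meters
    disparity_px -- how far the point shifts between the two images
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity puts the point at infinity")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 20 px between the two views, with a 1 cm
# baseline and a 1000 px focal length, sits about half a meter away.
print(depth_from_disparity(1000.0, 0.01, 20.0))  # 0.5
```

The software-only, single-camera approach has to guess at the scale that this formula hands you directly.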

Comments and opinions are most welcome!

Best,
–Kevin

First, the size and resolution of the screen. I don’t think the regular iPhone is large enough to fit over a person’s eyes.

Second, the addition of a second camera. You need a second camera to provide depth perception…

I question that these two are compatible. To fit over both eyes, the phone would be held in landscape mode. In landscape mode the two cameras sit one above the other, not next to one another. Depth perception would require them to be on the same axis as the eyes; right/left eyes with top/bottom camera just isn’t going to work.

1 Like

Depth perception would require them to be on the same axis as the eyes; right/left eyes with top/bottom camera just isn’t going to work.

Do me a favor please. Tilt your head 90 degrees and see if you still believe the above to be true.

1 Like

If I am upright and holding a phone screen in front of both eyes so that it fills my vision, I am holding it in landscape mode. If, while doing this, I tilt my head 90 degrees, right or left, either the phone screen is no longer in front of both eyes OR the phone moved with me and YES, it still isn’t going to work.

(Totally OT, but…
https://notalwaysright.com/a-new-dimension-of-stupidity-part…
https://notalwaysright.com/a-new-dimension-of-stupidity/5408…)

2 Likes

RHinCT,

I think what Phoolio was suggesting is that you yourself turn your head 90 degrees - not that you look through the phone while doing this. And when you do turn your head sideways, can you still discern objects in 3D?

Of course the answer is yes, and that is how depth perception in 3D space works. You only need two light-processing receivers (your eyes, or a set of lenses) placed a distance apart in order to do this. It doesn’t matter if they are placed on a horizontal axis, a vertical axis or a diagonal axis. It is called binocular vision, and the images received from each source are processed by your brain no matter where the sources are located - your brain adjusts to this.

The brain is a wonderfully complex processor, and it can process images in 3D even when you turn your head sideways. Computer software can do the same thing, and can be programmed to know when its “eyes” are turned sideways, upside down or anywhere in between.
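Here is a toy check of that claim (plain 2-D geometry, nothing to do with any real device): tilt a two-sensor rig and its target together by any angle, and the angle the sensors subtend at the target - the raw material for the depth calculation - does not change.

```python
import numpy as np

# Toy 2-D check that the placement axis doesn't matter: tilt the
# whole two-sensor rig and the target together, and the angle the
# sensors subtend at the target (the depth cue) stays the same.

def subtended_angle(cam_a, cam_b, target):
    """Angle between the target's lines of sight to the two sensors."""
    u, v = cam_a - target, cam_b - target
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cos_t))

def rotate(p, degrees):
    t = np.radians(degrees)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]]) @ p

cams = (np.array([-0.0325, 0.0]), np.array([0.0325, 0.0]))  # ~6.5 cm apart
target = np.array([0.0, 2.0])                               # 2 m away

for tilt in (0, 45, 90):  # horizontal, diagonal and vertical placement
    a, b, tgt = (rotate(p, tilt) for p in (*cams, target))
    print(tilt, round(subtended_angle(a, b, tgt), 4))  # same value each time
```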

Best,
–Kevin

3 Likes

The FT reports Apple (NASDAQ:AAPL) "has assembled a large team of experts in virtual and augmented reality and built prototypes of headsets that could one day rival Facebook’s Oculus Rift or Microsoft’s HoloLens."

One more thing… why invest in a company in catch-up mode? Just look at Microsoft after Windows/Office topped out.

One more thing… I love my Mac.

Denny Schlesinger

3 Likes

One more thing… I love my Mac
Yep - I miss my Commodore 64! I thought the Commodore PET computer was pretty cool too.
A

3 Likes

Yes, but you will spill your drink.

Cheers
Qazulight

1 Like

It doesn’t matter if they are placed on a horizontal axis, a vertical axis or a diagonal axis.

Correct. But a vertical or diagonal arrangement will not match up to the real-world 3D we see.
3D photography is always done horizontally.

https://www.google.com/search?q=3D+Camera&client=safari&…

JT

Hi upndn,

https://www.google.com/search?q=3D+Camera&client=safari&…

Respectfully, 3D photography is not dependent on a horizontal layout.

Those images of 3D cameras are depicted with horizontal viewfinders because that is where your eyes are. 3D image capture will still work if the device is turned on a different axis.

For proof of this, turn one of those cameras sideways. Do you think the camera will still shoot in 3D, or not?

Best,
–Kevin

Like Cook, I also believe AR has a whole lot more room to run than VR. I have the Samsung Gear VR, which, while cool, is a completely isolating experience.

The intrigue of AR is that you’re still present in the world. It’s what Google Glass attempted to do. I can’t imagine AAPL’s vision for this is strapping your phone to your head and using the camera to see.

Part of the reason Google Glass failed was that it was too nerdy. There’s no way people are going to interact in public with someone who has a phone strapped to their head. Imagine going to dinner with someone and, instead of eye contact, making “lens contact”.

1 Like

While one can see in 3D with one’s head turned, the view is different from the view with the head upright. The point is not that horizontal is the only way, but that to give the eyes appropriate information the sensors have to be in the same orientation as the eyes.

1 Like

The point is not that horizontal is the only way, but that to give the eyes appropriate information the sensors have to be in the same orientation as the eyes.

Exactly what I said, but tamhas said it in much more eloquent words.

1 Like

I see I am alone on this, so we might have to agree to disagree.

All that is needed to calculate depth is two separate collection sources placed a known distance apart to detect vergence.

Vergence is when your eyes converge on the same object. Your brain calculates depth by determining the vergence angle your eyes make: if the vergence angle is large, the object is closer; if the vergence angle is small, it is farther away.
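As a back-of-the-envelope sketch of that relationship (assuming both eyes converge symmetrically on the point, with illustrative numbers):

```python
import math

# Rough sketch of the vergence cue, assuming symmetric convergence
# on the fixated point. Numbers are illustrative.

def distance_from_vergence(baseline_m: float, vergence_deg: float) -> float:
    """Distance to the fixated point, given the full vergence angle."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (baseline_m / 2.0) / math.tan(half_angle)

# Eyes ~6.5 cm apart: a larger vergence angle means a closer object.
print(distance_from_vergence(0.065, 3.72))  # ~1.0 m
print(distance_from_vergence(0.065, 1.86))  # ~2.0 m
```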

But you only need one eye to actually see the image. The second eye gives the brain better information as to depth (how far away or close something is). Even people with one eye can sense depth by other cues supplied to the brain (such as motion, texture, shading, etc.).

Binocular disparity is what people typically think of as the depth perception cue. Since your eyes aren’t in the same place, they get slightly different images of the same object. This information is used by the visual cortex to determine depth.

Horizontal placement of your eyes has nothing to do with your brain detecting depth; it only means your brain knows you happen to have eyes that are horizontally placed at a known distance apart (typically 6.5 cm).

My point is that software can be used to convert all the image information received from the two different lens sources and present it in a consistent manner for you to process, just as your brain does.

Think of it this way. If I launched a satellite equipped with two cameras into space, do you think the 3D processing of the data it collects would break down as the satellite rotates freely through 360 degrees?

Software has the benefit that it can be reprogrammed to convert data in any manner we see fit - and represent images accordingly - including in 3D.

Respectfully,
–Kevin

5 Likes

The point is not that horizontal is the only way, but that to give the eyes appropriate information the sensors have to be in the same orientation as the eyes.

Since the phone is in essence a very powerful computer, when it collects the visual information (in whatever orientation the cameras are in), it can, through software, process that visual information and present it on the screen in the way the eyes need it.
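As a toy illustration of that step (plain numpy, and only a guess at how such a pipeline might be arranged - not Apple’s actual method): with the phone in landscape, the two cameras sit one above the other, and a single 90-degree rotation in software hands the rest of the pipeline an ordinary left/right stereo pair.

```python
import numpy as np

# Toy illustration: in landscape the two cameras sit one above the
# other, but one 90-degree rotation in software remaps the pair into
# an ordinary left/right stereo source before any depth processing.
# (Which camera becomes which eye depends on which way the phone was
# turned; assume a clockwise turn here.)

def remap_landscape_pair(upper_frame: np.ndarray, lower_frame: np.ndarray):
    """Rotate both frames so the camera baseline runs left/right again."""
    left_view = np.rot90(upper_frame, k=-1)   # k=-1 rotates clockwise
    right_view = np.rot90(lower_frame, k=-1)
    return left_view, right_view

# Two 480x640 captures come out as upright 640x480 views.
upper = np.zeros((480, 640), dtype=np.uint8)
lower = np.zeros((480, 640), dtype=np.uint8)
left, right = remap_landscape_pair(upper, lower)
print(left.shape, right.shape)  # (640, 480) (640, 480)
```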

“why invest in a company in catch-up mode”

The company that’s first to market isn’t always the winner. Example: Netscape vs. Microsoft, though MSFT did play very dirty pool. The point is to come up with a very good product at an opportune time. The market is extremely young for AR and VR, so there’s plenty of room for new products.

Microsoft did a lot of “Me, too” with inferior products, but it’s interesting that their latest “Me, too” Surface tablet is actually considered to be very good.

1 Like