
Facebook F8 Day 2: 10 Year Roadmap For VR/AR, Wireless, Brain Interfaces And Open-Sourced AI


SAN JOSE, CA – APRIL 18: A view of the booth demonstrating Facebook Spaces at Facebook’s F8 Developer Conference.

Yesterday marked the last day of F8 2017, Facebook’s annual developer conference, which I attended over the last several days in San Jose (my rundown of Day 1 is here). Attendance for Day 2 was considerably down from Day 1, but I certainly wasn’t complaining—no line to get into the keynote, and it wasn’t raining. I was a little disappointed in the overall lack of significant hardware news from the Day 2 keynote, such as some sort of Amazon.com Echo/Dot or Google Home knock-off. Still, there was some hardware, and some very pleasant surprises toward the end made it worth my while by providing insight into where Facebook is headed and why. Here’s my rundown of the keynote with some points of analysis and opinion.

Improving the “360 experience”

Facebook Chief Technology Officer Mike Schroepfer kicked off Day 2’s keynote by referencing Facebook’s 10-year “roadmap” and setting the agenda for the day—a deeper dive into the three main areas Facebook is focusing on in the coming decade: connectivity, AI, and AR/VR. To be clear, this isn’t a real product development roadmap, but more of a directional statement of vision.

Schroepfer proceeded to introduce two new 360 camera designs—the x24 and its fun-sized sibling, the x6. With 6DoF capabilities (six degrees of freedom, for the uninitiated), these cameras, Facebook claims, will provide “some of the most immersive and engaging content” ever shot for VR purposes. The 6DoF is interesting, as it adds the element of depth to VR video. I saw some cool demos from Intel at CES 2017 using HypeVR that let you look around objects, and it blew me away. Anshel Sag got up close and personal with the new cameras and will be doing a deeper dive next week.
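To make the distinction concrete, here is a minimal sketch of what 6DoF adds over the 3DoF orientation tracking of ordinary 360 video. The names and structure are purely illustrative, not any actual Facebook or camera API:

```python
# Illustrative only: 3DoF tracks orientation; 6DoF adds position, which is
# what lets a viewer lean and look around objects in captured video.
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    """Standard 360 video: you can look around, but only from a fixed point."""
    yaw: float    # degrees, look left/right
    pitch: float  # degrees, look up/down
    roll: float   # degrees, tilt your head

@dataclass
class Pose6DoF(Pose3DoF):
    """Depth-aware capture: small head translations actually move the viewpoint."""
    x: float  # meters, step left/right
    y: float  # meters, move up/down
    z: float  # meters, lean forward/back
```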

Schroepfer also introduced the 360 Capture SDK, which allows developers (and hence those developers’ users) to capture their own custom VR experiences through 360 photos and videos, then upload them to a VR headset or to their own Facebook News Feed. These feature additions are smart, as they leverage community the same way sharing regular photos does for non-360 folks. Facebook, along with YouTube, was among the first to let users share VR videos, and I just started doing it with my new Gear 360 camera.

The other bit of “360” news is that Facebook has developed three new AI techniques that purportedly will improve the resolution of 360 captures—AI view prediction, gravitational view prediction, and content-dependent streaming technology for non-VR devices. These techniques predict exactly where to focus the highest concentration of pixels, which Facebook claims will improve VR experiences under difficult network conditions. This is important stuff, and the more quickly you can do these VR tricks, the faster the industry will get to a killer experience. We aren’t there now.
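Facebook didn’t publish the details of these techniques, but the general idea behind view prediction is easy to sketch: guess where the viewer will look next and spend the bits there. Here is a minimal, hypothetical tile-based version; the function names and the cosine weighting are my own illustration, not Facebook’s method:

```python
# Hypothetical sketch of view-prediction-driven 360 streaming: extrapolate
# head motion, then weight per-tile bitrate toward the predicted gaze.
import math

def predict_yaw(history, dt=0.5):
    """Naive linear extrapolation of head yaw (degrees) from recent samples."""
    if len(history) < 2:
        return history[-1]
    velocity = history[-1] - history[-2]       # degrees per sample
    return (history[-1] + velocity * dt) % 360

def allocate_bitrate(predicted_yaw, tile_centers, total_kbps=8000):
    """Give each tile a share of the budget based on angular distance from gaze."""
    weights = []
    for center in tile_centers:
        diff = abs(center - predicted_yaw)
        diff = min(diff, 360 - diff)                        # wrap around the sphere
        weights.append(math.cos(math.radians(diff / 2)) ** 2)
    total = sum(weights)
    return [total_kbps * w / total for w in weights]

# Viewer turning right; six tiles around the equator, 60 degrees apart.
yaw_history = [80.0, 90.0, 100.0]
tiles = [0, 60, 120, 180, 240, 300]
print([round(b) for b in allocate_bitrate(predict_yaw(yaw_history), tiles)])
```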

Why are VR and 360 so important to Facebook? Two reasons. First, VR is a new platform where new eyeballs and new ad units will be. Facebook doesn’t want to get caught off-guard again like when they went public and were criticized for their reliance on “desktop.” Remember that? It took them a year, and they did a masterful job going “mobile.” VR is a new advertising platform, and they need to be there. Second, VR is also a new paid-content unit. As Netflix did a judo move on Blockbuster, why can’t Facebook do the same to Netflix with VR games and movies?
Improving low- and high-speed connectivity

Facebook’s business model depends on the number of users and their activity level to drive effective advertising. Facebook has nearly 2B users, but they eye more potential users in emerging regions and realize they need some form of internet access to reach those markets. In those areas, they are investing in cheap or free internet. In areas that already have internet, the goal is less expensive or faster wireless internet with lower latency. In a sense, Facebook is circumventing carriers, which I’m keeping my eye on over the long term.

Yael Maguire, director of Facebook’s Connectivity Program, took the stage after Schroepfer to discuss the work Facebook is doing to help build “communities through connectivity.” Maguire noted that not all communities can be connected by traditional methods—urban, densely populated areas don’t have enough bandwidth to support large numbers of users consuming large amounts of data, while on the flip side, it’s often too expensive to deploy technology in rural, remote communities. Maguire touted Facebook’s new Tether-tenna—a helicopter tethered to a wire containing fiber and power—a technique that takes the essence of the traditional radio tower and makes it portable and immediately deployable in times of need. Maguire noted that there are still a lot of kinks to work out with Tether-tenna—figuring out what to do about wind and lightning, for one.

Free wireless service in impoverished regions helps families and public services. It also gets another billion people on Facebook. That is why Facebook is doing this.

Maguire also spent a lot of time explaining millimeter wave (mmWave) technology to the audience as if they’d never heard of it before, and talking up Facebook’s efforts in it. To be fair, some probably never had, but everyone has heard of 5G, and 5G relies heavily, though not exclusively, on mmWave. Facebook managed to achieve some new speed benchmarks (36 Gbps over 13 km, 16 Gbps from the ground to a Cessna, and 80 Gbps with an optical cross-link), but I thought it was bizarre that they chose to focus so heavily on a technology that, frankly, isn’t all that novel. What was even more bizarre is that there was no recognition of the companies really driving mmWave—Qualcomm, Intel, Samsung, and Ericsson—who pioneered all of this. Also, there was no talk about 5G NR, the “real” 5G.
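Still, the physics makes those numbers more impressive than they may sound. Free-space path loss grows with frequency, so a multi-kilometer mmWave link needs enormous antenna gain to close. A back-of-the-envelope check, assuming a 60 GHz carrier since Facebook didn’t specify the band:

```python
# Friis free-space path loss: FSPL(dB) = 20log10(d) + 20log10(f) + 20log10(4*pi/c).
# The 60 GHz carrier below is an assumption; Facebook didn't name the band.
import math

def fspl_db(distance_m, freq_hz):
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

print(f"60 GHz over 13 km:  {fspl_db(13_000, 60e9):.1f} dB")   # ~150 dB
print(f"2.4 GHz over 13 km: {fspl_db(13_000, 2.4e9):.1f} dB")  # ~122 dB, ~28 dB easier
```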

Great connectivity in cities is important, but why Facebook? It’s important for untethered AR and VR experiences. It’s also a potential play to cut the carriers out of part of the equation in case ISPs start to play hardball with net neutrality. Facebook’s market cap is $420B; if you’re 5G-only, legacy voice calling isn’t important because it’s replaced by VoLTE, and if you already have massive datacenters everywhere, then it’s all about building out the RAN (radio access network). Think about it: Facebook the carrier.
AI improves photo and video analytics, Caffe2 gets open-sourced

The next segment of the keynote was given to Applied Machine Learning Director Joaquin Quiñonero, who for the most part spent his time rhapsodizing about how drastically AI has changed computers’ capacity to analyze and understand image and video content. The only real announcements in this segment were that Facebook is open-sourcing Caffe2 (its framework for building and executing AI algorithms on mobile devices) and that it is building AI partnerships with giants Amazon.com, Intel, Microsoft, NVIDIA, and Qualcomm, amongst others.
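For the curious, Caffe2’s public Python API at the time looked roughly like this. This is a minimal sketch based on the published tutorials (define a tiny net and run one forward pass), not any code Facebook showed at F8:

```python
# Minimal Caffe2 sketch: define a one-layer net, initialize it, run a forward pass.
import numpy as np
from caffe2.python import model_helper, workspace

# Feed a batch of 16 random 100-dim inputs into the global workspace.
workspace.FeedBlob("data", np.random.rand(16, 100).astype(np.float32))

# One fully connected layer (100 -> 10) followed by a softmax.
model = model_helper.ModelHelper(name="tiny_net")
model.param_init_net.XavierFill([], "fc_w", shape=[10, 100])
model.param_init_net.ConstantFill([], "fc_b", shape=[10])
fc = model.net.FC(["data", "fc_w", "fc_b"], "fc1")
model.net.Softmax(fc, "pred")

workspace.RunNetOnce(model.param_init_net)  # fill the weights
workspace.RunNetOnce(model.net)             # forward pass
print(workspace.FetchBlob("pred").shape)    # (16, 10)
```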

Again, I have to pose the question—with all these AI capabilities, they still couldn’t identify a gun and/or blood in the “Facebook killer” video? It seems that maybe they ought to redirect some of the focus and capability they’re putting into targeted ads into making sure tragedies like that video don’t continue to happen on their network. Even if this results in some videos being falsely flagged for removal while they work out the glitches—kids playing with toy guns, hunting videos, etc.—to me that seems preferable to a brutal murder streaming on Facebook for two hours before being taken down.

So why does Facebook invest so much in AI? Many reasons. It can go through every one of your pictures, videos, and friends and know pretty much everything about you: your race, mood, income level, how you relax, whether you’re stressed. All this to create better ads, better ad placement, and better ad timing.

At this portion of the keynote, my attention and interest were beginning to flag. While Day 2 keynotes typically possess less thunder than Day 1 kickoffs, I had yet to see anything particularly new or exciting. As it would turn out, though, they’d saved the really cool stuff for last.

A future, augmented

Michael Abrash, Chief Scientist of Oculus VR, took the stage next and outlined his vision for a future where the “real and virtual worlds mix freely.” He described the rise of personal computing and shared his belief that virtual computing—that is, AR and VR—will be the next great wave to revolutionize the world. His context went all the way back to Xerox PARC, where he made some excellent analogies and drew history lessons that parallel the huge wave we are entering, experientially and technologically.

Abrash said this will require light, comfortable, power-efficient, and, most importantly, socially acceptable AR glasses that augment both your vision and hearing. He stressed the importance of the glasses being see-through, with virtual images overlaid—nobody wants to interact with someone whose eyes they can’t see. He conceded that the technology needed for true AR glasses isn’t available yet, and that it’s going to take years of hard work, investment, and major technological advancement in a number of fields.

I concur with Abrash that it will take five, maybe seven years. The timeframe difference depends on the “ramp” to real “volume”.

By the time Abrash finished on Wednesday afternoon, he had me convinced. I think Mike Abrash is a rock star who really gets what it will take to make VR, AR, and MR viable in the marketplace—just the person Facebook needs at the helm of technology at Oculus. I’m hoping the rest of the industry has that patience. Are we in that trough of despair yet with AR/VR/MR?

What’s going on in Building 8? Oh, just brain and skin-control interfaces

The last segment of the keynote was some seriously mind-bending stuff. Regina Dugan, VP of Engineering at Building 8 and formerly of Google and DARPA, wrapped things up by telling us about two unique “silent speech” projects. She spent the most time on Building 8’s effort to build a system that would let a person type at 100 words per minute by decoding their neural activity. That’s correct—typing with your brain. Invasive techniques already do just this, but at a much slower pace; Facebook’s goal is a faster, non-invasive system (possibly using quasi-ballistic photons) that could serve as a speech prosthetic for those with communication disorders, or as a means of input for the AR systems Abrash envisioned in the previous segment. For that purpose, even a simple yes/no brain “click” would be, in Dugan’s words, “transformative.”
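Some rough arithmetic shows how ambitious that target is. At roughly five letters plus a space per word and about 4.7 bits per character (treating letters as equally likely, so this is an upper bound; the true entropy of English is lower), 100 words per minute works out to dozens of bits per second, while non-invasive BCIs of the era were generally measured in single bits per second:

```python
# Back-of-the-envelope information rate for "typing by brain" at 100 wpm.
# Assumptions: ~6 characters per word (5 letters + space), uniform 26-letter
# alphabet (~4.7 bits/char); real English entropy is lower, so this is a ceiling.
import math

words_per_min = 100
chars_per_word = 6
bits_per_char = math.log2(26)                        # ~4.70

chars_per_sec = words_per_min * chars_per_word / 60  # 10.0 chars/s
bits_per_sec = chars_per_sec * bits_per_char         # ~47 bits/s
print(f"{chars_per_sec:.1f} chars/s, ~{bits_per_sec:.0f} bits/s")
```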

The other project Dugan described is aimed at allowing people to “hear” through their skin, via an “artificial cochlea” that uses electrodes to trigger sensors in the skin, which are hardwired to the brain. The team is designing a new “haptic” vocabulary for this system, and Dugan showed the crowd a neat demonstration in which one person communicated with another this way, using basic vocabulary words she had learned only in the last hour to complete simple directions (“throw white sphere”).

It all sounds like something out of a science fiction novel, but Dugan assured the crowd that we are far closer than you might think to these two projects becoming viable communication methods. It was an exciting, inspiring note on which to wrap up the keynote.

Why a brain-computer interface? Today we connect on Facebook with smartphones and PCs: human to device to Facebook to device to human. Messy things like keyboards, touchscreens, microphones, and hand controllers get in the way. Tomorrow it could be our brains doing the UI manipulation, and therefore Facebook wants in on that.
Why Facebook does what it does

While it was hard to get really excited about any big news (or lack thereof) from the first half of Day 2’s keynote, I really enjoyed the more visionary topics the last two speakers focused on. AR glasses, near-telepathic communication methods? That’s a future I think we can all get a little excited about. Facebook is doing some really neat stuff—they just have to make sure they keep their eye focused on the future, and maybe—just maybe—put their advanced AI to work in keeping live murders out of people’s news feeds.

When analyzing Facebook or, for that matter, Google and Amazon.com, it’s vital to know how they make their money to understand context for why they do what they do. Facebook is in the business of selling advertising space at very high rates to billions of people everywhere they are. To maximize those ad rates, they need to know everything about you, your friends, what you say, what you do, how you feel and where you are to build the best ad profiles. To reach the broadest base per person, Facebook needs to be in your face on every platform 24×7.

With that said, here is why Facebook does what it does in the context of day 2:

  • VR, AR glasses and 360: Create the next new ad platform, just as “mobile” succeeded “desktop,” to serve ads in and build rich profiles. Also, attract users with fun filters to keep them on the platform and connected, so Facebook can build profiles and serve ads.
  • Low-speed wireless: Get the next billion people, who don’t have internet today, hooked on Facebook services.
  • High-speed wireless: Improve the untethered VR/AR/360 ad platform and position the company to cut out ISPs in case net neutrality gets out of hand.
  • Artificial intelligence: With just the consumer’s pictures, audio, and video, derive important context to build better ad profiles and serve more relevant ads. Add this data to all other forms of data and you have gigantic data, which can only be made sense of and acted upon with AI. Again, to serve you better ads. Oh, and as we saw on Day 1 with the camera as the next AR platform, AI helps improve video and photo filter-based AR so you stay longer on Facebook and don’t head to Snapchat. Finally, Caffe2 gets open-sourced so it improves in quality as the community contributes, and Facebook doesn’t have to shoulder all the R&D.
  • Brain-control interface: If we start communicating and building community with our brains, skipping the keyboard and mouse, it’s important that Facebook be out front; it can’t afford to be late as it was with mobile.

All this may sound very pessimistic, but why else would Facebook be investing so much in the future? The company is worth $420B because Wall Street says the future value of its amortized cash flow going forward is $420B, which is all based on its ability to build ad profiles and sell ads in the future. There we have it. All that brilliant technology for ads.

