Untethered from San Francisco to Boston: Intel’s Project Alloy and MBTA Mechanics Get New Eyes

Andrew Wheeler posted on August 25, 2016 |
Source: http://www.engineering.com/DesignSoftware/DesignSoftwareArticles/ArticleID/12977/Untethered-from-San-Francisco-to-Boston-Intels-Project-Alloy-and-MBTA-Mechanics-Get-New-Eyes.aspx

IDF 2016 – San Francisco

This city is probably best known for gold-mining, Joe Montana, earthquakes, Robin Williams, hippies, Steph Curry and tech-induced rapid-fire gentrification. Cultural fluctuations and pop culture icons aside, last week saw the late Andy Grove’s legendary processor company Intel put on Intel Developer Forum (IDF) 2016, where it made some big announcements in the “mixed reality” space. Mixed reality is a term that encompasses both virtual reality (VR) and augmented reality.

Before getting into the details of Project Alloy, it may be worth noting that there is a small “language battle” going on to describe technology that comes from the convergence of technologies that were previously thought of as singular. First of all, remember that these terms exist in the 3D space for the most part.

Project Alloy was built by Intel. The hardware requirements and design files are going to be released later this year in an effort to promote both Intel hardware and Windows Holographic, the new OS in development by Microsoft. (Image courtesy of Intel.)

For example, HP came out with “blended reality” to describe a workflow realized in its Sprout computers—3D scanning (capturing reality data), 3D computation (augmenting or changing the data with software) and 3D printing (turning captured reality data into new reality data).

There is the “digital thread” from 3D Systems and “reality computing” from Autodesk—and now “mixed reality” (perhaps originating from Magic Leap) seems to be catching on. But at least mixed reality conflates two of the many interesting 3D technologies that seem to be swimming to the same points in the near future.

The technological combinations include those of photogrammetry, light detection and ranging (LiDAR), generative design, spatial design, artificial intelligence, sensor data, robotics, robotic digital fabrication, machine vision, additive manufacturing, computer numerical controlled (CNC) machining, drones and mobile devices, as well as augmented reality and virtual reality. They’re all bumping into each other. Overall, these terms are abstract expressions of a linguistic failure to extrapolate some pithy yet cohesive description of a workflow that moves either circuitously or singularly in the digital and physical realms, or even within either the digital or physical alone.

But on top of all of these terms, Intel is distinguishing its VR escapade from even the catchy and popular “mixed reality” by calling it “merged reality.” Whatever it is, Intel’s leap of faith is certainly noteworthy to those who have a vested interest in producing VR experiences.

Project Alloy

After the presentation of Project Alloy at San Francisco’s Moscone Center, Intel appeared to have jumped headfirst into the VR space. But there were a lot of vagaries zipped up in the big reveal.

Remember, this was a surprise move by Intel. Nobody was expecting the reveal of an untethered VR headset that lets you move around multiple rooms while integrating external reality data (your hands, other people, walls, furniture, stairs) in real time, “merging” physical and virtual worlds.

The headset uses embedded cameras and motion sensors to detect when objects are close enough to cause problems. When such objects are detected, they do something that is perhaps similar to “breaking the fourth wall” in film—they transform into digital data and peek through into your virtual world. Pretty cool stuff. It’s like a digital doorway. You walk through the physical and into the digital.
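Intel hasn’t published how Alloy’s object detection actually works, but the idea described above can be illustrated with a toy depth-threshold rule: wherever a physical object is closer than some safety distance, show the camera feed instead of the rendered scene. A minimal sketch, with all names and the per-pixel data shapes purely hypothetical:

```python
def merge_pixels(virtual, camera, depth, threshold_m=1.0):
    """Toy 'merged reality' compositor: for each pixel, show the camera
    feed if a physical object is within threshold_m meters, otherwise
    keep the rendered virtual scene.

    virtual, camera: parallel lists of pixel values.
    depth: matching list of depths in meters (0.0 means no depth reading).
    """
    merged = []
    for v, c, d in zip(virtual, camera, depth):
        if 0.0 < d < threshold_m:   # physical object is close: pass it through
            merged.append(c)
        else:                       # nothing nearby: stay fully virtual
            merged.append(v)
    return merged
```

A real headset would do this per frame on depth maps from stereo or infrared cameras, but the “digital doorway” effect reduces to exactly this kind of per-pixel switch.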

Surprisingly, Intel isn’t actually making the headset itself. However, it did present onstage what looked like a finished, mass-produced hardware product.

CEO Brian Krzanich presented the user experience of Project Alloy, and the big denouement: the hardware requirements and design files will be made available as open-source documents by next year so that people can create their own versions. But no website has been set up for Project Alloy, and Intel gave no exact date for posting the files.

Look, Ma, all hands! (Image courtesy of Intel.)

Before leaving the stage, Krzanich introduced Terry Myerson of Microsoft to talk about holographic computing. As you may or may not know, Microsoft HoloLens is a mixed reality device that allows users to see both the digital and physical intertwined. In a way, the HoloLens is a reverse experience of what Intel demonstrated with Project Alloy. With HoloLens, you can see all physical information, but digital information is projected over it, and you can interact with it physically. It’s funny that at the digital doorway, where the user interacts with digital data, the difference between the two becomes negligible from a user perspective.

Digital mock-up of Windows Holographic OS. Skype and Microsoft Edge are lurking in the corners. (Image courtesy of Microsoft.)

There’s good news if you or a small team are interested in electronics design: with hardware designs and specs freely available from Intel, and the freedom to work with whichever manufacturer you’d like, this could be your year.

For now, you’ll just have to wait for more announcements from Intel.

But forget waiting. There are a lot of start-ups and small companies that are jumping into the mixed reality space in innovative ways that impact engineers, machinists and mechanics.

Take Boston-based AMA, for example.


MBTA stands for the Massachusetts Bay Transportation Authority, and KCS is an acronym for Keolis Commuter Services, which will be giving its mechanics pairs of Osterhout Design Group (ODG) R-7 Smartglasses. Here’s the hook: these R-7 Smartglasses come packed with customized software designed by AMA XpertEye, which is based out of Cambridge, Mass.

AMA XpertEye

Much as Microsoft is pairing a universal holographic OS for mixed reality headsets (which it calls Windows Holographic) with open-source headset hardware designs like those Intel is offering with Project Alloy, AMA XpertEye built its own Android-based OS for mixed reality devices from Epson, Vuzix, ODG and Google.

AMA XpertEye’s software allows field mechanics wearing the ODG R-7 Smartglasses to augment their vision by streaming a remote video connection to expert technicians in KCS’s central maintenance station.

The field mechanic’s Smartglasses connect only through a smartphone, and the technician at the central facility joins the connection from a laptop.

During the video communication, either party can take snapshots of the video, annotate them with written messages or instructions, and send them back and forth. The video can be saved and repurposed as a training video for a specific maintenance job or function.

In the field, where an issue crops up shapes how it gets handled, and KCS covers a wide swath of MBTA tracks and stations, stretching from the New Hampshire border to Providence, R.I., and west to Worcester, Mass.

Testing the Smartglasses in Three Locations

If you’re near Somerville, Mass., you may see a field mechanic at an MBTA maintenance station wearing some unusually large-looking sunglasses with a wire hanging off them, apparently talking to himself or herself. If you live near the MBTA maintenance facility in the Boston community of Readville, you might see the same thing. But you probably won’t see the Smartglasses in action at the third location, the end of an MBTA train line, unless you work for KCS.

Organizations like KCS are interested in mixed reality Smartglasses running AMA XpertEye’s custom Android OS because remote field mechanics and other workers become more versatile when using them. Remote experts will save workers in Somerville from walking 30 minutes back and forth between a site and the main facility, and field mechanics equipped with R-7 Smartglasses could help KCS save resources by fixing issues on the trains where they sit, instead of transporting them to the main facility for repairs.

For field mechanics, it certainly beats the radio connection headsets they use now. As for AMA XpertEye, the fact that its custom Android OS can be ported to different mixed reality headset hardware shows who Microsoft is competing with in the mixed reality OS sector: Google.

Bottom Line

Android smartphones have traditionally used Qualcomm processors, while Intel has pushed its processors into Windows devices. And if you’re curious, Apple uses its own A9 processor in the more recent iPhones, commonly manufactured by its smartphone rival Samsung.


Posted in Uncategorized | Leave a comment

Salam Aidilfitri (Happy Eid al-Fitr)


New Blog

Please go to new blog at http://razakschool.utm.my/syazli/



How The Growth Of Mixed Reality Will Change Communication, Collaboration And The Future Of The Workplace

by Pete Sena (@petesena)



Pete Sena is the founder and chief creative officer of Digital Surgeons.


Sci-fi tech, meet Wall Street.

A recent report from investment bank Goldman Sachs predicted that within 10 years, virtual reality hardware will be an $80 billion industry. This “base case” forecast assumed that adoption will be slow compared to that of smartphones and tablets, but, the report noted, “as the technology advances, price points decline, and an entire new marketplace of applications (both business and consumer) hits the market, we believe VR/AR has the potential to spawn a multi-billion dollar industry, and possibly be as game changing as the advent of the PC.”

While the conversation around VR (virtual reality) and AR (augmented reality) often focuses on gaming and video entertainment, the Goldman report theorizes that these use cases will account for less than half of the software market.


As a sometimes-gamer, I find it fun to think about strapping on a headset and diving headfirst into my favorite virtual worlds. But to limit our imagination to these applications is to ignore the unlimited potential of a hybrid reality, created by augmented and virtual technology, to affect every business and industry.

By combining analog, two-dimensional ways of working with new mixed-reality experiences, we can transform our ability to communicate, collaborate and create. The challenge for businesses will not be to provide a more immersive experience, but a more valuable experience.

The continued disruption of communication modalities

Message carriers were put out of work by the telegraph, the telephone was disrupted by the Internet and the good old-fashioned conference call was replaced by VoIP video conferences and screen-share-enabled unified communications systems.

Before the Internet, the historical evolution of long-distance communication technology was always toward replicating human connection in its clearest form: a face-to-face conversation. The telegraph may have missed the human voice, but its relative speed was a step toward an immediate verbal response.

Ironically enough, the first words spoken across a telephone line in 1876 by Alexander Graham Bell to his assistant Thomas A. Watson were, “Mr. Watson, come here — I want to see you.”

Mixed reality has the potential to allow a global workforce of remote teams to work together and tackle an organization’s business challenges.

Most digital communication across the Internet lacks the verbal, facial and body language cues of a face-to-face conversation, but the reach of our messages and the media at our disposal (photos, videos, memes, gifs, articles, etc.) have made it a medium of undeniable allure and value.

Why would I call a friend on the phone and tell them about a great concert when I can post a status and let all my friends know at once, all while showing them a video of me belting out my favorite song with the performer?

That being said, to say there is sometimes communication breakdown across the Internet is an understatement that requires no further explanation for anyone who has ever read a comments section.

Don’t get me wrong: a connected world is undoubtedly a better world. I defer to the mission statement of the Mark Zuckerberg-led Internet.org for a perfect summation:

“The internet is essential to growing the knowledge we have and sharing it with each other. And for many of us, it’s a huge part of our everyday lives. But most of the world does not have access to the internet. Internet.org is a Facebook-led initiative with the goal of bringing internet access and the benefits of connectivity to the two-thirds of the world that doesn’t have them. Imagine the difference an accurate weather report could make for a farmer planting crops, or the power of an encyclopedia for a child without textbooks. Now, imagine what they could contribute when the world can hear their voices. The more we connect, the better it gets.”

But the more we connect, the more important it is that we connect better.

Virtual, augmented and mixed experiences that exist at the intersection of our physical and digital worlds will bring the humanity of the face-to-face conversation back into the evolution of our communication.

Don’t make the mistake of equating these virtual experiences solely with sci-fi and gaming applications, in which you inhabit a surrogate in a separate, alternative reality.

Mixed reality, or hybrid reality, merges real and virtual worlds to produce new environments where physical and digital objects co-exist and interact in real time.

I’m not talking about plugging into the Matrix as a means for improved communication. I’m talking about the ability for two people across the world to put on a headset and share any experience they choose — whether it’s to sit next to each other and physically flip through a photo album or to visit their dream destination.

Five or 10 years ago, we used text to communicate. Today, we communicate and share with photos and videos. Tomorrow, with VR, we’ll be able to communicate with experience.

What does this mean for the future of the workplace?

For one, it means improved collaboration. Mixed reality has the potential to allow a global workforce of remote teams to work together and tackle an organization’s business challenges. No matter where they are physically located, an employee can strap on their headset and noise-canceling headphones and enter a collaborative, immersive virtual environment.

Language barriers will become irrelevant as AR applications are able to accurately translate in real time. Imagine Google Translate acting in real time between two or more people.

It also means a more flexible workforce. While many employers still use inflexible models of fixed working time and location, there is evidence that employees are more productive if they have greater autonomy over where, when and how they work. Some employees prefer loud workspaces, others need silence. Some work best in the morning, others at night.

Employees also benefit from autonomy in how they work because everyone processes information differently. The classic VAK model for learning styles differentiates Visual, Auditory and Kinesthetic learners.

Visual learners will appreciate the immersion and optic stimuli of mixed reality. If nothing else, auditory learners will benefit from the reduction in auditory distractions that plague the modern open office space. Kinesthetic learners, who learn best by moving, touching and doing, will benefit from being able to explore and collaborate in mixed reality. Conference calls that cause kinesthetics to tune out can be replaced by interactive, tactile modes of work, like whiteboarding sessions.


This greater autonomy in where, when and how employees work will serve to maximize productivity by empowering them to complete tasks in the manner that is best for them. It will allow employees to enter and work in “flow” states of complete absorption.

Coined by renowned psychologist Mihaly Csikszentmihalyi, flow refers to “the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity.”

Video gamers should immediately recognize this mental state, as game design is particularly adept at inducing flow states where hours and hours fly by and the player is completely enveloped in the game.

Csikszentmihalyi theorizes that in order to retain flow and “stay in the zone,” the activity must reach a balance between the activity’s challenges and the participant’s abilities. If the challenge is too great, it promotes anxiety — too easy, and it promotes boredom.


The seesaw between anxiety and boredom is far too familiar to the modern workforce. Without fail, we try to get heads-down on a project, and the emails, Slack messages and “do you have a minute?” desk drive-bys keep us from ever being able to focus. Anxiety rears its ugly head.

We finally get the project done and while we are waiting for feedback from the client or organizational leadership, the communication channels miraculously quiet down. This is where boredom comes in.

Mixed reality is conducive to inducing flow states because of its ability to immerse employees in designed experiences that match their learning styles, preferences for stimuli and ability. But perhaps more importantly, it can serve to limit the distractions that cause anxiety and the latency that leads to boredom.

The challenge for businesses will not be to provide a more immersive experience, but a more valuable experience.

Distractions are eliminated because we can design worlds that push only the messages imperative to the work at hand.

Latency, or the time between an action and its response, is eliminated when our work is memorialized digitally as we complete it. A client or supervisor is able to join our work process digitally at any time to track and review progress.

Last, but certainly not least, mixed reality creates solutions for the universal problem of finite resources.

Aside from eliminating the monetary cost of travel and the opportunity cost of time spent on red-eye flights and in jet-lagged meetings that plague global business, mixed reality reduces an even scarcer resource: real estate.


On a macro level, population is increasing and space is not. Reducing the need for large offices by creating virtual workspaces will make the office park a relic.

On a micro level, just think about your own office. There are never enough conference rooms, and never enough workspaces. That awesome whiteboard you just covered with great ideas? Your colleague is coming in 30 seconds after you finish for a client call and needs it erased.

Mixed reality workspaces that memorialize our work while we complete it will not require furious note taking and cell phone picture snapping in those 30 seconds.

In fact, those 30 seconds will not exist, because whether we are sitting at our desk, in our home or in Starbucks, accessing a perfectly designed virtual workspace is as simple as putting on your headset.

The future of communication and collaboration at work will be defined by virtual, augmented and mixed reality experiences that provide economic value. To equate this collision of our physical and digital worlds solely with play and entertainment is to miss one of the great upcoming technological evolutions of our workforce.


The Speech that Made Obama President


6 Elevator Pitches for the 21st Century


Public Lecture


Doctors turn to big data for help with cancer care

By Jeff Blagdon on March 28, 2013


Source: http://www.theverge.com/2013/3/28/4155530/doctors-create-big-data-tool-for-cancer-care



The ability to query vast swaths of data is transforming government and industry, and now doctors are beginning to apply the same ideas to cancer treatment. Today, the American Society of Clinical Oncology is announcing that it has completed work on a prototype for a "learning health system" called CancerLinQ that collects and analyzes cancer care data from the millions of patient visits on file around the country.

"Information is locked away in unconnected servers and paper files."

Currently, doctors have easy access only to clinical trial data, which represents about three percent of the 1.6 million patients diagnosed with cancer every year. In a press release, Society president Dr. Sandra M. Swain says that very little is known about the majority of people who get cancer treatment "because their information is locked away in unconnected servers and paper files." The goal of CancerLinQ is to make this data accessible to doctors, in order to help guide treatment.

But making all this information more freely available isn’t without privacy concerns. In 1997, Carnegie Mellon computer science professor Latanya Sweeney famously showed how easy it was to de-anonymize health care information, pulling out a Massachusetts governor’s record from a "scrubbed" data set. ASCO says that the CancerLinQ project has undergone "extensive technology and legal analysis."
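Sweeney’s demonstration worked by joining the “scrubbed” medical records with a public voter roll on shared quasi-identifiers (ZIP code, birth date and sex): a record is re-identified whenever exactly one named person matches its triple. A minimal sketch of that style of linkage attack, with all field names hypothetical:

```python
def linkage_attack(scrubbed_records, public_roster):
    """Join 'anonymized' records to named records on quasi-identifiers.

    Each scrubbed record and roster entry carries zip, birth_date and sex;
    roster entries also carry a name. A scrubbed record is re-identified
    when exactly one roster entry shares its quasi-identifier triple.
    """
    # Index the named roster by the quasi-identifier triple.
    index = {}
    for person in public_roster:
        key = (person["zip"], person["birth_date"], person["sex"])
        index.setdefault(key, []).append(person["name"])

    # A scrubbed record with a unique match has its identity recovered.
    reidentified = {}
    for i, rec in enumerate(scrubbed_records):
        key = (rec["zip"], rec["birth_date"], rec["sex"])
        matches = index.get(key, [])
        if len(matches) == 1:
            reidentified[i] = matches[0]
    return reidentified
```

The point is that no single field is identifying on its own; it is the combination, joined against an outside dataset, that unlocks the record.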

It’s hoped that the project will grow to encompass almost all patients in the country

It will be another 12 to 18 months before ASCO is ready to roll out the new technology to doctors. For now, the project is starting small, with data from just 100,000 breast cancer patients, but doctors hope to expand it to encompass almost all patients in the country, especially those whose health conditions would exclude them from consideration for clinical trials. As Dr. Charles Penley tells The Wall Street Journal, "someone who has other conditions — heart disease, diabetes, et cetera… these are patients that we take care of in the real world every day."


Location Data Can Uniquely Identify Cellphone Users

A new study demonstrates how easy it is to identify people from the location-tracking data on their cellphones.
By Francie Diep | Posted 03.27.2013 at 4:29 pm

Location Tracking: rendering by Christine Daniloff/MIT of an original image by Yves-Alexandre de Montjoye et al.

Just a few data points from a location-tracking cellphone are enough to identify most people, a new study found. It doesn’t matter if those data are "anonymized" so they aren’t linked to any identifiers such as address or phone number. Just four random points are enough to put names to 95 percent of the anonymized users in a cellphone database.

The study fits in with growing evidence that fairly publicly available data—cellphone location data is open to many location-tracking apps, for example—is not as anonymous as you might think.

The research team, including technology researchers from the U.S., Belgium and Chile, looked at 15 months’ worth of location data from 1.5 million cellphone users in a "small European country." The data weren’t particularly detailed. They simply tagged people by their closest cellphone tower once an hour. Many apps get similar data from Apple and Android.

The team then figured out the math to identify 95 percent of the phone users from just four randomly selected data points. Given 11 data points, they could identify all of the users.
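The measure behind this result is often called "unicity": treat each user's trace as a set of (antenna, hour) points, sample a few points from a target's trace, and check whether anyone else's trace also contains all of them. A toy sketch, with the data shapes assumed rather than taken from the paper:

```python
import random

def is_unique(traces, user, points):
    """True if `user` is the only one whose trace contains all sampled points.

    traces: dict mapping user id -> set of (antenna_id, hour) tuples.
    points: set of (antenna_id, hour) tuples sampled from user's trace.
    """
    matches = [u for u, trace in traces.items() if points <= trace]
    return matches == [user]

def unicity(traces, p, trials=1000, seed=0):
    """Estimate the fraction of sampled users pinned down by p random points."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        user = rng.choice(list(traces))
        trace = list(traces[user])
        if len(trace) < p:
            continue  # trace too short to sample p points from
        points = set(rng.sample(trace, p))
        hits += is_unique(traces, user, points)
    return hits / trials
```

The study's striking finding is that on real hourly antenna data, this fraction already reaches about 95 percent at p = 4.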

The danger of being able to identify cellphone users so easily is that you could deduce some pretty private information just from where people go. You could see if someone attends certain religious or political meetings, visits an HIV/AIDS or reproductive clinic, or hangs out with an ex or a business rival.

At the same time, cellphones and smartphone apps aren’t about to give up tracking their users’ locations. Looking at my own phone, it’s hard to identify which apps don’t track my location (the AllRecipes Dinner Spinner, maybe?). Sometimes I really appreciate the geolocation, like when I’m looking for a nearby restaurant, or lost in the city.

The new study’s authors don’t come down hard on one side or another. "This formula is something that could be useful to help the debate and decide, okay, how do we balance things out, and how do we make it a fair deal for everyone to use this data?" Yves-Alexandre de Montjoye, one of the study’s authors and a doctoral student at MIT’s Media Lab, told MIT News.

De Montjoye and his colleagues published their work in the journal Scientific Reports.
