Spatial Computing devices let us interact with and control the real and “make believe” world around us. They give us a way to use the physical 3D world as a canvas for a digital experience.
People can hold, wear, and use small computers on their hands, head, body, and face to interact with digital experiences. These computers add rich layers of interactivity to Augmented Reality and Virtual Reality (AR and VR). But this technology isn’t just for people. Real objects like cars, robots, and houses can use computers to “see” the real world and interact with it.
Spatial Computing already has applications in domains as diverse as warehouse logistics, autonomous vehicles, vocational training, and gaming. At Fishermen Labs, we believe that its use cases will only continue to expand as new technology helps us create a fully immersive, seamless digital world.
Components of Spatial Computing
Spatial Computing depends on a wide range of inputs and outputs, which flow back and forth between humans and computers in a continuous loop. This loop creates the experience, allowing the user to complete the task at hand.
Interaction Technologies
These points of input and output are called Interaction Technologies, and a variety of devices can trigger or receive them. Interaction Technologies include:
- Hand Tracking (AR or VR)
- Body Tracking (AR or VR)
- Voice Control
- Eye Tracking
- Haptics
- Trackers / Sensors / Proximity / LiDAR
Hand Tracking
Users interact directly with virtual content using their hands: they can grab, pinch, push, slide, and swipe. In AR, hand recognition is used mostly to control interactions, since hands must be visible to the camera to be tracked. In VR, hand movements can be sensed even outside the user’s view and mapped to additional interaction options, including triggers, joysticks, and buttons.
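To make this concrete, here is a minimal sketch of how a pinch might be detected from hand-tracking data. The `Hand` structure, landmark names, and 2 cm threshold are illustrative assumptions; real SDKs expose comparable fingertip positions.

```python
import math
from dataclasses import dataclass

@dataclass
class Hand:
    """Hypothetical hand-tracking sample: 3D fingertip positions in meters."""
    thumb_tip: tuple   # (x, y, z)
    index_tip: tuple   # (x, y, z)

PINCH_THRESHOLD_M = 0.02  # assumed: fingertips within 2 cm count as a pinch

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_pinching(hand: Hand) -> bool:
    """A pinch is recognized when thumb and index fingertips nearly touch."""
    return distance(hand.thumb_tip, hand.index_tip) < PINCH_THRESHOLD_M

# Example: fingertips 1 cm apart -> pinch detected
hand = Hand(thumb_tip=(0.0, 0.0, 0.0), index_tip=(0.01, 0.0, 0.0))
print(is_pinching(hand))  # True
```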
Body Tracking
We can track body movement through wearables like vests, gloves, and headwear, or a combination of these. This allows us to interact with others, try on apparel virtually, operate vehicles safely, or track a user’s location in real space.
Voice Control
Communicating with our voice is natural. That’s why we have Amazon Alexa, Google Home, and Siri. Devices listen for spoken commands and pass them along as input to the rest of the system.
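As a rough illustration, here is a sketch of how a device might route already-transcribed speech to actions. It assumes a separate speech-to-text step has produced the utterance, and the command phrases and handlers are hypothetical.

```python
# Minimal sketch: routing transcribed voice input to device actions.
# The command table and handlers below are made up for illustration.

def turn_on_lights():
    print("Lights on")

def start_timer():
    print("Timer started")

COMMANDS = {
    "turn on the lights": turn_on_lights,
    "start a timer": start_timer,
}

def handle_utterance(utterance: str):
    # Normalize, then dispatch to the matching handler if one exists.
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action()
    else:
        print(f"Unrecognized command: {utterance!r}")

handle_utterance("Turn on the lights")  # -> Lights on
```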
Eye Tracking
Eye tracking senses where you are looking. For safety reasons, it also tracks where you are not looking. This data helps the system understand a user’s intent and lets computers reduce their computational load by rendering at full detail only what is immediately relevant.
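One common application of this idea is rendering full detail only near the center of gaze. The sketch below assumes gaze and object directions are available as 3D vectors; the 15-degree foveal cone is a made-up threshold.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

FOVEA_DEGREES = 15.0  # assumed: full detail only near the gaze center

def detail_level(gaze_dir, object_dir):
    """Full detail inside the foveal cone, reduced detail outside it."""
    return "high" if angle_between(gaze_dir, object_dir) < FOVEA_DEGREES else "low"

print(detail_level((0, 0, 1), (0.05, 0, 1)))  # "high": near gaze center
print(detail_level((0, 0, 1), (1, 0, 0)))     # "low": far in the periphery
```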
Haptics (touch)
Adding a sense of touch helps make the experience feel more natural. Imagine buttoning up a shirt in VR with haptic gloves! Touch gives us rich feedback about whether we have successfully interacted with our environment. Some gloves already “tighten” when you grab a digital object, making it feel real.
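Here is a minimal sketch of that “tighten on grab” behavior. The `HapticGlove` interface is hypothetical; real gloves expose vendor-specific actuator APIs, but the control logic looks broadly like this.

```python
# Minimal sketch: fire a haptic "tighten" response when the user grabs
# a virtual object. HapticGlove is a stand-in for a vendor API.

class HapticGlove:
    def tighten(self, strength: float):
        print(f"Glove actuators tighten at {strength:.0%}")

    def release(self):
        print("Glove actuators release")

def on_grab_state_changed(glove: HapticGlove, grabbing: bool, stiffness: float):
    """Map the grabbed object's stiffness (0..1) to actuator strength."""
    if grabbing:
        glove.tighten(stiffness)
    else:
        glove.release()

glove = HapticGlove()
on_grab_state_changed(glove, grabbing=True, stiffness=0.8)   # firm object
on_grab_state_changed(glove, grabbing=False, stiffness=0.8)  # let go
```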
Trackers / Sensors / Proximity / LiDAR
These inputs use a large number of sensors to track the user and their physical environment. They can tell us what the environment looks like, how it’s laid out, where the user is, how they are moving, what they are interacting with, and what they intend to interact with.
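A simple example of what these sensors enable is a proximity check against LiDAR-style depth points. The scan data and safety threshold below are made up for illustration.

```python
import math

# Minimal sketch: proximity check against LiDAR-style depth points.
# Each point is an (x, y, z) sample of the environment in meters,
# relative to the sensor.

def nearest_obstacle_distance(points):
    return min(math.sqrt(x*x + y*y + z*z) for (x, y, z) in points)

SAFE_DISTANCE_M = 0.5  # assumed clearance before warning the user

scan = [(1.2, 0.0, 2.0), (0.3, 0.1, 0.2), (2.5, 1.0, 4.0)]
d = nearest_obstacle_distance(scan)
if d < SAFE_DISTANCE_M:
    print(f"Obstacle {d:.2f} m away -- too close")
else:
    print(f"Clear: nearest obstacle {d:.2f} m away")
```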
Examples of Spatial Computing
Gaming & Entertainment
Spatial Computing provides a level of depth, interaction, and immersion that doesn’t exist in traditional media. It can pull users into a story and experience, allowing the story to adapt and be told using the user’s actual surroundings and environment. Brands should be cognizant of this.
Automated Logistics Warehouses
Amazon deploys autonomous robots to pick and pack products. Using a variety of sensors, proximity calculations, and coordinates, each robot navigates to a specific location in the warehouse, moves to the correct physical shelf, picks the product, puts it in a bin, and returns to its original location, all while safely avoiding physical structures, humans, and other robots.
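Path planning around shelves and people is the core of that navigation. The toy breadth-first search below is only a sketch; production robots use far richer maps and planners, but the idea of searching a grid of free and blocked cells is the same.

```python
from collections import deque

# Toy warehouse map: 0 = free cell, 1 = shelf (blocked).
GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def shortest_path(grid, start, goal):
    """Breadth-first search: returns the shortest list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

print(shortest_path(GRID, (0, 0), (2, 3)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3)]
```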
Autonomous & Semi-Autonomous Vehicles
Tesla and Waymo use a variety of spatial sensors to recognize their surroundings, moving objects, and people. These vehicles can navigate challenging terrain with input from a driver (semi-autonomous) or without any input at all (fully autonomous).
Education and Training
AR and VR technologies provide ways to train for real-world scenarios without the risk. Use cases include surgical simulation in AR/VR, training production-line workers, and artistic production.
The Future of Spatial Computing
AR and VR already provide a foundation for a future that is more feature-rich and interactive, with faster, lighter, and cheaper devices. As they evolve, more of the real world will seamlessly “tie in” as part of the experience. 5G, Ultra-Wideband, and the Internet of Things will make this possible.
In the future, it will be common to digitally map any room or environment and account for every object inside it. Inputs and outputs will flow between all connected devices, so that our surroundings become one integrated digital whole, explorable digitally and physically at the same time.