
Virtual reality gaming and the pursuit of ‘flow state’

Comment

Maggie Lane

Contributor

Maggie Lane is a writer and producer of virtual reality experiences and covers the industry for various publications.


You need to stop procrastinating. Maybe it’s time for some…

Bulletproof Coffee, Modafinil, nootropics, microdoses of acid, caffeine from coffee, caffeine from bracelets, aromatherapy, noise-canceling headphones, meditation, custom co-working spaces or productivity apps?

Whatever your choice, workers today (especially in the tech industry) will do just about anything to be more productive.

What we seek is that elusive, perfect focus — or flow state. According to researchers, someone in flow experiences a diminished sense of self, reduced fear and a distorted sense of time. It is peak performance coupled with a euphoric high. All your happy neurotransmitters fire, and your dorsolateral prefrontal cortex performs differently — you do not second-guess yourself; you simply flow into the next stage of the activity at hand. And you happen to be performing at the highest level possible. Sounds amazing, right?

But how do we invite this state in? A detailed piece in Fast Company outlines how extreme sports (professional surfing, steep incline skiing, skydiving, etc.) are the quickest way we’ve found to tap into human flow. Yet, these hobbies are just that — extreme. They require a large amount of skill and can be dangerous. For example, Steven Kotler, a pioneer in flow state research, broke almost 100 bones as a journalist researching the topic.

It all leads back to our collective (and very American) obsession with input versus output — are we achieving the most possible with the energy we put in? For all the bells and whistles at our disposal, we as a society are steadily declining in productivity.

In 2014, a Gallup Poll found that the average American worker spends a depressing 5 percent of their day in flow. A 2016 Atlantic article hypothesized that the main reason our workforce is decreasing in productivity is that we're not introducing new technologies quickly enough. Tech like robotics and smartphones could provide a productivity push, but isn't being integrated into the workplace. Business models are for the most part not that different from 10 years ago. In essence, we're bored — we're not being challenged in an engaging way, so we're working harder than ever but achieving less.

But what if getting into flow state could be as easy as playing a video game?

Gameplay in RaveRunner

I first met Job Stauffer, co-founder and CCO at Orpheus Self-Care Entertainment, when I was, in fact, procrastinating from work. I was scrolling through Instagram and saw a clip of Job playing RaveRunner. As I love rhythm games, I immediately requested a build. Yet, I’d soon learn that this wasn’t just a simple VR experience.

RaveRunner was built for Vive, but easily ran on my Rift. When I first stepped into the game, I felt a bit overwhelmed — there was a lot of dark empty space; almost like something out of TRON. It was a little scary, which is actually very helpful for entering flow state. However, my fear soon dissipated as before me was a transparent yellow lady (Job calls her “Goldie”) dancing with the beat — providing a moving demo for gameplay. Unlike the hacking nature of Beat Saber, where you smash blocks with lightsabers, in RaveRunner you touch blue and orange glowing circles with your controllers, and move your whole body to the rhythm of the music.

There’s a softer, feminine touch to RaveRunner, and it wasn’t just Goldie. Behind the design of this game is a woman, Ashley Cooper, who is the developer responsible for the gameplay mechanics that can help a player attain flow. “Being in the flow state is incredibly rewarding and we strive to help people reach it by creating experiences like RaveRunner,” says Cooper. RaveRunner is a game you can get lost in, and by stimulating so many senses it allows you to let your higher level thoughts slip away — you become purely reactionary and non-judgmental.

In essence — flow.

After playing in this world for an hour, I called Job and learned more about his company. Apart from RaveRunner, Orpheus has also rolled out two other experiences — MicrodoseVR and SoundSelf. I got my first hands-on demo of all three products in one sitting at a cannabis technology event in Los Angeles, Grassfed LA. Grassfed is specifically geared toward higher-brow, hip tech enthusiasts, and the Orpheus suite of products fits right in.

As I lay in a dome with meditative lighting, a subwoofer purring below me, SoundSelf gave me one of the most profound experiences I've ever had in VR. I chanted into a microphone and my voice directly influenced the visuals before me. It felt like my spirit, the God particle, whatever you want to call it, was being stimulated by all these sensations. It was a beautiful experience, but it was also pure flow. I felt two minutes pass in the experience — I would have bet a hundred dollars on it. But I was inside for 10. Time didn't make sense: a key indicator of flow state.

Next up was Microdose VR. I first tried Microdose VR in 2016 at the Esalen Institute in Big Sur. Esalen is the birthplace of the human potential movement, so it was fitting that it was there that I initially grasped the potential of VR for transformational experiences. Every other experience I had tried up to that point had been a first-person shooter or a 360-video marketing piece. And not to slight those experiences, but I felt that VR must be able to do MORE. Android Jones' Microdose blew my mind. As with SoundSelf, I completely lost track of time. I was directly affecting the visuals with my body movements, and sound was a big factor as well. It was the first time I could easily imagine staying in VR for hours. Most of all, it was an experience that was only possible within VR. The game was the biggest euphoric rush I've felt in VR, and that feeling occurred again at this event.

As consumers, we have the power to choose games that tie intrinsically into self-care, but we often don't have the options available. Job was propelled down this path when he asked himself: "If I invest one hour of my time per day into playing a video game, what will I personally gain from that time invested, and will I even have time left over to do genuinely good things for myself?"

Orpheus is pioneering the fusion of game design with traditional self-care practices like meditation, dance/exercise, listening to music and creating art: “In short, we simply want players to feel amazing and have zero regrets about their time spent playing our games, allowing them to walk away knowing they have leveled up themselves, instead of their in-game avatars alone.”

One thing that will make it easier for people to try these experiences is portable headsets such as the Vive Focus and the Oculus Quest. Being untethered will allow people to travel with VR wherever they may go. Job sees this fundamental shift right ahead of us, as "video games and self-care are about to become one and the same. A paradigm shift. This is why all immersive Orpheus Self-Care Entertainment projects will be engineered for this critically important wave of VR."

Orpheus is not a VR-only company, although their first three experiences are indeed for VR. As they expand, they hope to branch out into a variety of immersive experiences, and they are continually looking for projects that align with their holistic mission.

At the end of the day, I love that Orpheus is attempting to tap into a part of the market that so desperately needs attention. If we don't make self-care a major part of VR today, then we'll continue to use VR as a distraction from, as opposed to a tool to enhance, our daily lives.

As for me, along with the peppermint tea, grapefruit candle and music that make my focus possible, I’ll now be adding some Orpheus games into my flow repertoire.
