As fashion has its metaverse moment, one app looks to bridge real and virtual worlds for sneakerheads


Fashion is having its moment in the metaverse.

A riot of luxury labels, music, and games are vying for attention in the virtual world. And as physical events and the entertainment industry that depends on them shut down, virtual things have come to epitomize the popular culture of the pandemic.

It’s creating an environment where imagination and technical ability, not wealth, are the only barriers to accumulating the status symbols that only money and fame could buy.

Whether it’s famous designers like Marc Jacobs, Sandy Liang, or Valentino dropping styles in Nintendo’s breakout hit, Animal Crossing: New Horizons; HypeBae’s plans to host a fashion show later this month in the game; or various crossovers between Epic Games’ Fortnite and brands like Supreme (which pre-date the pandemic), fashion is tapping into gaming culture to maintain its relevance.

One entrepreneur who’s spent time on both sides of the business, as a startup founder and an employee of one of the biggest brands in athletic wear, has launched a new app to try to build a bridge between the physical and virtual fashion worlds.

Its goal is to give hypebeasts a chance to collect virtual versions of their physical objects of desire and win points to maybe buy the gear they crave, while also providing a showcase where brands can discover new design talent to make the next generation of cult collaborations and launch careers.

https://www.instagram.com/p/B_szSLRDdW3/

Aglet’s Phase 1

The app, called Aglet, was created by Ryan Mullins, the former head of digital innovation strategy for Adidas, and it’s offering a way to collect virtual versions of limited edition sneakers and, eventually, design tools so all the would-be Virgil Ablohs and Kanye Wests of the world can make their own shoes for the metaverse.

When TechCrunch spoke with Mullins last month, he was still stuck in Germany. His plans for the company’s launch, along with his own planned relocation to Los Angeles, had changed dramatically since travel was put on hold and nations entered lockdown to stop the spread of COVID-19.

Initially, the app was intended to be a Pokémon Go for sneakerheads. Limited edition “drops” of virtual sneakers would happen at locations around a city, and players could go to those spots and add the virtual sneakers to their collection. Players earned points for traveling to various spots, and those points could be redeemed for in-app purchases or discounts at stores.

“We’re converting your physical activity into a virtual currency that you can spend in stores to buy new brands,” Mullins said. “Brands can have challenges, and you have to complete two or three challenges in your city; as you compete on that challenge, the winner will get prizes.”

Aglet determines how many points a player earns based on the virtual shoes they choose to wear on their expeditions. The app offers a range of virtual sneakers, from Air Force 1s to Yeezys, and the more expensive or rare the shoe, the more points a player earns for “stepping out” in it. Over time, shoes will wear out and need to be replaced — ideally driving more stickiness for the app.

Currency for in-app purchases can be bought for anywhere from $1 (for 5 “Aglets”) to $80 (for 1,000 “Aglets”). As players collect shoes they can display them on their in-app virtual shelves and potentially trade them with other players.

When the lockdowns and shelter-in-place orders came through, Mullins and his designers quickly shifted to create the “pandemic mode” for the game, where users can go anywhere on a map and simulate the game.

“Our plan was to have an LA specific release and do a competition, but that was obviously thrown off,” Mullins said.

The app has antecedents like Nike’s SNKRS, which offered limited edition drops to users and geo-located places where folks could find shoes from its various collaborations, as Input noted when it covered Aglet’s April launch.

While Mullins’ vision for Aglet’s current incarnation is an interesting attempt to weave the threads of gaming and sneaker culture into a new kind of augmented reality-enabled shopping experience, there’s a step beyond the game universes that Mullins wants to create.

Image Credits: Adidas

The future of fashion discovery could be in the metaverse

“My proudest initiative [at Adidas] was one called MakerLab,” said Mullins.

MakerLab linked Adidas up with young, up-and-coming designers and let them create limited edition designs for the shoe company based on one of its classic shoe silhouettes. Mullins thinks that those types of collaborations point the way to a potential future for the industry that could be incredibly compelling.

“The real vision for me is that I believe that the next Nike is an inverted Nike,” Mullins said. “I think what’s going to happen is that you’re going to have young kids on Roblox designing stuff in the virtual environments and it’ll pop there and you’ll have Nike or Adidas manufacture it.”

From that perspective, the Aglet app is more of a Trojan Horse for the big idea that Mullins wants to pursue. That’s to create a design studio to showcase the best virtual designs and bring them to the real world.

Mullins calls it the “Smart Aglet Sneaker Studio.” “[It’s] where you can design your own sneakers in the standard design style and we’ll put those in the game. We’ll let you design your own hoodies and then [Aglet] does become a YouTube for fashion design.”

The YouTube example comes from the star-making power the platform has enabled for everyone from makeup artists to musicians like Justin Bieber, who was discovered on the social media streaming service.

“I want to build a virtual design platform where kids can build their own virtual fashion brands and put them into this game environment that I’m building in the first phase,” said Mullins. “Once Bieber was discovered, YouTube meant he was able to access an entire infrastructure to become a star. What Nike and Adidas are doing is something similar, where they’re finding this talent out there and giving that designer access to their infrastructure, and maybe could jumpstart a young kid’s career.”
