Wearing her own paintings, artist to showcase work publicly for the first time at 'Sonder'
MINNEAPOLIS — Standing in front of her paintings, dressed in a bespoke suit cut from fabric printed with her own artwork, Sarah Edwards appears to have made an unapologetic entrance into the public art world. However, she says she's feeling quite vulnerable.
And though only she and I stood in the gallery room at the Chambers Hotel, Edwards said, "It feels like I'm standing naked in front of a big room of people."
As CEO of Fashion Week MN and marketing agency Some Great People, Edwards has spent her professional career connecting community members and putting the spotlight on other artists. But Saturday, Feb. 4 at Sonder, an interactive art experience at the Chambers Hotel, she's finally going to share her own creations.
"[I'm] anxious, for sure," Edwards said. "Impostor syndrome."
However, Edwards says she's grateful to be able to do it alongside roughly 20 other artists who will be occupying rooms and hallways at the Chambers Hotel. Each room will feature a different local maker. In one room, you'll see couture fashion courtesy of local designer Keiona Cook. In another, you can create AI-generated fashion, with the help of REM5 Studios CEO Amir Berenjian.
"When people come here on Saturday, they'll be able to--through text--input their wildest dreams which will be a one-of-one shirt, pants, or shoes and we'll be showcasing it up on the big screen," Berenjian said.
Berenjian said he's looking forward to showcasing his studio's work alongside so many other creatives.
"We all feed off each other's energy and are all inspired by each other," Berenjian said.
Producers of Sonder also say you can expect to be greeted by a hip-hop violinist when you enter, and ballet dancers will serve you cocktails. It's all part of the experience Edwards hopes will be, in her words, "weird."
"Maybe there's that outfit you've never been able to wear," Edwards said. "One person said to me, 'It's going to be like Burning Man meets the Met Gala.' And I'm like, 'OK!'"
Talking about the creative community is when Edwards truly lights up. It's why she named the event after a poetic, albeit fictional, definition of "sonder." For background, this particular definition comes from what started as a web glossary of made-up words author John Koenig penned to articulate "strangely powerful emotions," eventually published as the 2021 New York Times bestseller "The Dictionary of Obscure Sorrows."
In this "Dictionary," you'll find: "Sonder: The realization that each random passerby is living a life as vivid and complex as your own."
Edwards is admittedly obsessed with that notion and hopes the vivid and complex spirits that make up artists and their neighbors can join together to create a beautiful experience.
"My favorite part of all of it is the connections that are made," Edwards said. "Build relationships, figure out ways that you can partner, celebrate, uplift all the awesome artists that are a part of this."
After its first public demo of live text-to-image generation in late December, the immersive web platform SIMULACRA has added two more AI tools to the experience. You can now explore real-time text-to-architecture and text-to-fashion, opening up exponential possibilities for creation within the social 3D web platform. These tools are a glimpse into the future of content creation for both virtual worlds and physical/digital products.
Text-to-architecture: This feature allows creators to seamlessly update the walls, flooring, and artwork within the virtual apartment layout in real time. Through the power of AI, the user can swap out hardwood floors for carpet, a ball pit, or even moon rocks. This feature removes the pain point of making wholesale changes in complex 3D modeling software and allows for infinite creativity from anyone, with or without 3D art experience.
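As a rough illustration of how a prompt-driven swap like this could work conceptually, here is a hypothetical Python sketch (not SIMULACRA's actual API): a command such as "swap floor to moon rocks" is parsed into a surface and a material, then applied to a simple scene description.

```python
# Toy sketch of a text-to-architecture command (hypothetical, not
# SIMULACRA's real implementation): parse "swap <surface> to <material>"
# and update a scene description in place.
SUPPORTED_SURFACES = {"floor", "walls", "artwork"}

def apply_prompt(scene: dict, prompt: str) -> dict:
    """Parse a 'swap <surface> to <material>' prompt and update the scene."""
    words = prompt.lower().split()
    if len(words) >= 4 and words[0] == "swap" and "to" in words:
        surface = words[1]
        material = " ".join(words[words.index("to") + 1:])
        if surface in SUPPORTED_SURFACES:
            scene[surface] = material
    return scene

scene = {"floor": "hardwood", "walls": "white paint", "artwork": "abstract print"}
apply_prompt(scene, "swap floor to moon rocks")
print(scene["floor"])  # moon rocks
```

A real system would hand the parsed intent to a generative model rather than a lookup, but the user-facing loop, typed prompt in, updated room out, is the same.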
Text-to-fashion: The future of self-expression is increasingly digital. With this new AI feature, user creativity is completely unlocked through one-of-one avatar customization. This system is not only the future of how users represent themselves in SIMULACRA; it is also the future of how brands will empower their communities to create and customize their products in both the physical and virtual worlds.
As a platform focused on storytelling and education, SIMULACRA is leveraging these AI tools to accelerate content creation for brands, educators, and creatives so they can create rich and meaningful digital experiences for their communities.
"We believe that AI is a key component in the democratization of content creation for the next generation of digital experiences," said Amir Berenjian, CEO of REM5 STUDIOS. "We're excited to see what our partners build that goes far beyond gaming."
To demonstrate this capability, SIMULACRA is making an easy-to-use demo publicly available, accessible to anyone with a PC or Mac in a Chrome web browser. No downloads are required. The demo is multi-user, so feel free to experience it with others.
Try it here right now: https://www.simulacra.io/AI
For private tours, or to meet up with the SIMULACRA team, please email firstname.lastname@example.org.
A look back at internet trends and a look forward at the potentially far-reaching new applications of the technology.
Imagine a meeting with a group of 20 colleagues to plan the opening of a new retail store. You're staring at a checkered screen of faces on a video call: 2D renderings of a 3D space. There is no spatial awareness, eye contact, or body language.
Now, imagine that meeting taking place in a virtual reality environment. You are embodying a 3D avatar with accurately represented facial expressions and body movement and can communicate through spatial audio within a digital twin of the new store.
Now, you're back in the physical world, walking down the street in a bustling city, stuck staring down at your smartphone for directions and a dinner recommendation. Instead, imagine having that same information as a digital overlay on the physical world, right from your glasses or contacts.
The Metaverse = The Spatial Internet
Welcome to the metaverse: the internet with a sense of presence. At its core, the metaverse is a 3D interface, a digital layer that lets us access information and interact or communicate with others in a more natural, frictionless way.
The metaverse isn't a singular medium; it's a seamless combination of immersive technologies including virtual reality (VR) and augmented reality (AR). If you're a surgeon in the operating room with a patient, you may want an augmented reality display that provides you with real-time information and instructions. But if you're still in medical school, a virtual reality simulation in which you can learn through repetition, get hands-on training from a teacher anywhere on the planet, fail safely, and benefit from a real-time data feedback loop is likely preferable.
The potential is here—augmented reality to seamlessly interact with information and virtual reality to help democratize experiences.
The Business of the Metaverse
A simple mention of the "M" word can generate opinions ranging from excitement to skepticism to confusion. Is it marketing hype driven by greed and capitalism? Or does it represent the next evolution of how humans leverage technology?
Adding fuel to the conversation, institutions including McKinsey & Company and Citigroup have projected the metaverse to generate $5 trillion to $13 trillion in value by 2030. These projections are supported by the fact that spatial and social platforms such as Roblox and Minecraft already count over 200 million monthly active users and billions of dollars of e-commerce, with seemingly no end to the growth in sight. Judging from conversations with parents of younger kids, Generations Z and Alpha are skipping over legacy social media and going straight to the spatial web on both PC and mobile devices, and the transition to head-mounted displays is on the horizon.
Humans + Computers: 1980s to Today
To contemplate the idea of the metaverse, it's important to contextualize past paradigm shifts in our relationship with technology and its role in our daily lives.
The first significant wave in mainstream computing occurred in the 1980s as the computer moved into the home. Throughout the '90s, businesses and consumers quickly found value in the internet through access to massive amounts of information and basic communication. Then came the dot-com crash of 2000 and, for a while, headlines seemed to label the internet a passing fad. Jump ahead to 2010 and 77 percent of American households had personal computers, the internet had more than 200 million websites, and Facebook had over 600 million users.
The internet wasn’t a fad.
Next up came the mobile and cloud era, which ramped up at an even quicker pace. As of 2022, more than 85 percent of Americans own a smartphone, with that figure jumping to 96 percent for ages 18-29. Nearly every aspect of how we work, learn, play, transact, and socialize has been interwoven with technology.
The Next Generation of the Internet
Since the dawn of computing, the primary way we interact with technology has been through flat screens and flat interfaces: a keyhole view into the expansive digital world. That view lacks a fundamental sense of presence, a feature that is key to the human experience. Our brains constantly and unconsciously map the 3D world around us, both visually and audibly, storing information, memories, and emotions.
So, in some ways, transitioning into the world of spatial computing brings us back to how we naturally engage with the world around us.
But it’s early.
Aspirations of spatial computing, virtual reality, and augmented reality are nothing new. But now, the computing power and underlying infrastructure have begun to catch up to those aspirations. As with many disruptive technologies, billions of dollars are currently being invested in this space across industries such as retail, education, healthcare, marketing, entertainment, training, and more.
Companies such as Meta (Facebook), HTC, Microsoft, Apple, HP, Nvidia, Unity, Google, Amazon, and Qualcomm are ramping up innovation in hardware, software, and infrastructure. Following the massive consumer adoption by Generations Z and Alpha, enterprise is embracing the new normal and leveraging metaverse technology for remote work and internal upskilling with companies such as Accenture and Walmart already deploying tens of thousands of virtual reality headsets.
Where Does the Metaverse Go From Here?
We are undoubtedly on the doorstep of the next generation of how humans interact with technology. Like its technological predecessors, the metaverse has the potential to impact how we work, learn, play, and connect. And when it's here, we won't be talking about it. It will be seamlessly integrated into all aspects of our lives just like the internet.
The metaverse is not dead. It’s just in its infancy.
Early signs show that generative AI will drastically shape how we build virtual worlds - to a point.
Generative AI has already appeared everywhere, with discussion spreading faster than the common cold at a tech conference. The metaverse will be impacted too, and CES brought it one step closer. There, NVIDIA unveiled updates to its Omniverse platform that connect with Move.ai for body movements and Lumirithmic for facial meshes. Think of it as a cohesive toybox for deploying experiences with a spectrum of assistive tools, each ready to pluck and apply. “NVIDIA has one of the broadest spectrums in the world for deploying generative AI applications at scale,” said Jesus Rodriguez in a great piece.
I agree, but the ramifications are much wider than that. In 2023, we may see a wave of generative AI tools poured into metaverse platforms, perhaps as bits and pieces rather than a cohesive package like Omniverse. Simulacra is one of the first metaverse platforms to offer these features. Meta unveiled a ‘Builder Bot’ tool that lets users speak objects into existence within Horizon Worlds, commanding shapes to pop into virtual spaces. We will see more tools that ease the building process and, perhaps, bring more people into the fold.
At the same time, I wanted to step away from speculation. Pundits often overhype generative AI, letting flights of fancy lift them from reality. Yes, the number of research papers on the topic is increasing, and we may see an AI tool that can build a fully-featured world through mere sentences. The seeds of this vision were planted in 2022 and before. But we are a long way from this cohesive package.
Still, green shoots are peeking out of the ground. Companies already offer generative AI solutions that can also benefit metaverse projects; a few examples follow.
Executive summary
A summary of my findings, with an expansion on each point throughout the piece:
How is AI related to the metaverse? AI is already being used across multiple metaverse platforms, from curating experiences that it recommends to you to generating avatars based on inputs. The difference here is scale; generative AI has the potential to step beyond smaller frameworks, to generate whole worlds and assets based on text or voice inputs.
Take video games, where AI is already making waves. It's no surprise; of all industries, video games will likely be the most disrupted by the trend. James Gwertzman and Jack Soslow at a16z Games point to the time-sucking processes of asset creation, experience design, and iterative testing. Assets that could have taken weeks to finalise can now take just a few hours, plus some minor tweaks and embellishments. Stages of game development that once took months or years are condensed into short bursts, letting teams move quickly to the next phase. Swift rendering matters too: if assets can be tweaked and then tested quickly, it saves time on iteration and finalisation.
The same benefits apply to metaverse projects, too. Virtual worlds can be built based on simple inputs, to lay the framework. Gameplay additions can be added as well, then tweaked based on the requirements of the user. Once built, the virtual world can be populated with NPCs that follow set instructions, speaking without needing voice actors. “There is going to be a time when developers are going to spend less time writing code and more time expressing what they need,” said Danny Lange, senior vice president of AI at Unity. “That increased productivity is going to be transformative for applications of the metaverse outside a narrow space, like gaming.”
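The scripted-NPC idea above can be sketched in a few lines. This toy Python example is purely illustrative (it is not any platform's real tooling): an NPC "follows set instructions" by matching a player's typed line against intents and returning canned dialogue, no voice actor required.

```python
# Toy NPC that follows set instructions: scripted lines keyed by the
# player's apparent intent (illustrative only).
NPC_SCRIPT = {
    "greet": "Welcome to the gallery, traveler.",
    "quest": "Bring me three moon rocks from the west wing.",
    "farewell": "Safe travels.",
}

def npc_reply(player_line: str) -> str:
    """Pick a scripted response based on simple keyword matching."""
    line = player_line.lower()
    if any(word in line for word in ("hello", "hi", "hey")):
        return NPC_SCRIPT["greet"]
    if "quest" in line or "task" in line:
        return NPC_SCRIPT["quest"]
    return NPC_SCRIPT["farewell"]

print(npc_reply("Hello there!"))  # Welcome to the gallery, traveler.
```

In practice the keyword table would be replaced by a language model, which is exactly where generative AI slots into the pipeline, but the surrounding game logic stays this simple.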
These experiences will not match the quality of AAA titles in console gaming. God of War: Ragnarok would lack impact if Kratos – played by the award-winning Christopher Judge – spoke with a replicable voice and intonation. Nor would AI tools match the exact specifications for the vines in Vanaheim, or the frosted mountains of Niflheim. But it cuts the process down so much that it makes it easier for anyone to make worlds or spaces, which encourages innovation and experimentation. It’s the elixir of tools, potential and connectivity that could make metaverse platforms thrive.
Auto-generating virtual worlds as templates
One example is generating worlds. I'm not the best builder – my Lego sets attest to that. The same goes for making a website, even with helpful frameworks. But building virtual worlds, on any platform, takes the complexity to a new level. Professionals worry about the position and design of objects, the interactions of NPCs, and the exact height and shape of locations. It's a far cry from a simple 2D screen with images or words.
Savvy creatives can build web-based experiences via A-Frame, or use the tools already supplied within metaverse platforms. Horizon Worlds has a great building experience, as does Rec Room. They all work, but additional components can help smooth out the ideation process. Plus, frictionless tools expand adoption across industries; WordPress made it easier for people to build websites, and TikTok's editor made it easier to create short-form content. Generative AI can auto-populate virtual worlds via text or voice prompts, saving time and effort. The same tools can also generate images and clips that can be published across multiple formats, from websites to YouTube.
We already have tech that can do this. NVIDIA GET3D can generate 3D images based on 2D inputs that can then be used as assets. “GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”
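To make the auto-population idea concrete, here is a deliberately tiny Python sketch. It is purely illustrative (no real platform works this simply): keywords in a prompt map to placeholder assets, which are scattered onto a grid as a rough template for a human builder to refine.

```python
import random

# Illustrative prompt-to-template generator: keywords select placeholder
# assets, which are placed at random grid cells (hypothetical, not any
# platform's real API).
KEYWORD_ASSETS = {
    "forest": ["pine_tree", "boulder", "fern"],
    "pirate": ["ship_hull", "barrel", "treasure_chest"],
    "castle": ["stone_wall", "torch", "banner"],
}

def generate_template(prompt: str, size: int = 8, seed: int = 0) -> list:
    """Return (asset, x, y) placements for keywords found in the prompt."""
    rng = random.Random(seed)  # seeded so the layout is reproducible
    assets = [a for kw, items in KEYWORD_ASSETS.items()
              if kw in prompt.lower() for a in items]
    # Scatter each asset onto the grid; a creator edits from here.
    return [(a, rng.randrange(size), rng.randrange(size)) for a in assets]

layout = generate_template("a pirate forest island")
print(len(layout))  # 6 placeholders: 3 forest assets + 3 pirate assets
```

A production system would swap the keyword table for a generative model producing real meshes (GET3D-style), but the workflow is the same: prompt in, rough world out, human polish last.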
Going beyond the hype
Though exciting, generative AI has sparked sky-high thinking that filled the heads of pseudo-professionals with fumes of nonsense. One narrative focuses on replacement, and how it will replace large swathes of the workforce. Nathan Benaich, general partner at Air Street Capital, argues for nuance in an interview with Sifted: “The reality is – even though the progress right now seems like it’s exponential in images, video and text – I think there’s just so many nuances that companies need to solve for when they take these capabilities and build products for many people to use in a workflow.”
Personally, I do not see artists disappearing from the process. Humans are needed at all stages of the creative process in order to present a cohesive vision. Think of it as two humans on either side of an AI machine. One types out what's needed; the broad shape of an idea, with parameters and conditions. The machine churns and works, doing the hard work of bringing the image to reality. When it is spat out on the other side, the second human clips and shapes the asset into a closer vision of what the team needs. The AI is in the middle picking up the hard work of design, building, and colour – but not a replacement.
I see generative AI as a supplementary tool for metaverse development. Like video games, it will provide an assistive supplement to the creation of virtual spaces.
Predictions on how to use generative AI well with the metaverse
Generative AI will have a huge impact. It assists with the creative process, laying out the framework of a virtual world which can then be tweaked, edited, or replaced in the final version. I can see a deluge of low-quality virtual worlds sprouted by an AI, as a hodgepodge of items dedicated to popular IPs. We will no doubt see a virtual world where assets are inspired by a blend of Pirates of the Caribbean and Harry Potter. (Also, it would not shock me if these worlds are popular too, because players search with high-volume keywords across platforms.)
I also predict that a metaverse platform will catapult to mainstream stardom if it is web-based and offers generative AI elements. The winning platform may be frictionless to use (perhaps web-based), easy to generate worlds, and have a discovery feature based on keywords. The potent cocktail would generate a creator-led ecosystem which may rise in popularity swiftly. Simulacra is well-positioned, based on its previously-discussed announcements.
Finally, these outputs need to be complemented with a robust set of creator tools to shape the content itself. Generative AI could work on virtual platforms, but it could lead to a deluge of low-quality content if the tools to shape them do not function well. If a world is made, but it is difficult to edit, then users may clock off quickly. Streamlined editors are important for experimentation and adoption; the same is true here, as a complement to the technology.
I do not believe AI will replace human creativity. Ultimately it is a tool, and it can be used well or poorly. As Mokokoma Mokhonoana once said, “the usefulness of a thing is dependent not on what it is, or how it can be used, but on the needs or wants of the user.”
Virtual reality headsets can add a deeper component of three-dimensional communication, as it's a more natural form of engagement, experts said.
When COVID-19 forced workers home, companies quickly shifted communications strategies to videoconferencing platforms like Zoom and Microsoft Teams.
But as the pandemic lengthened, companies realized that they needed to take more than daily planning virtual. Even factories that stayed open had to update training procedures for people who would normally travel to learn about new equipment.
Enter the metaverse. Companies and organizations in Minnesota took immersive technology used in gaming to create new onboarding and training materials with computer-generated environments made to look and sound real while changing the way people communicate.
Now they say the technology is here to stay and are working on even more ways to use it — with both employees and customers.
Experts around the Twin Cities view the metaverse as the next iteration of how human beings leverage and interact with internet-based technology. This follows the introduction of the personal computer, dial-up internet, mobile phones, and browser- and app-based videoconference platforms, said Amir Berenjian, CEO of Rem5, a St. Louis Park-based virtual reality studio and development company.
For Uponor North America in Apple Valley, the U.S. headquarters for the global pipe manufacturer, Rem5 Studios created a virtual reality training system where new employees working remotely and customers outside the region can tour the company's unique manufacturing process, as well as quality controls and testing.
A few years ago, the company would have flown those workers to the Twin Cities.
"This is more scalable and cost-effective," Berenjian said.
Companies like Ford partner with VR companies to give their remote designers a place to collaborate in real time.
Rem5, also for Uponor, created an augmented-reality experience that displays 3-D holograms of Uponor products to show how they are individually fitted into one final piece and operate, allowing a person to learn about the product, inspect parts and interact with it without having to transport the physical part itself. Anyone with a mobile device connected to the internet can access the experience from anywhere in the world.
This technology can alter how companies and organizations engage with clients, too. Instead of hauling equipment to trade shows or to another business for demonstrations, VR can be added as a means to illustrate how equipment and machines function in the real world.
Using VR headsets
Virtual-reality headsets add a deeper component of 3-D communication, as it is a more natural form of engagement, Berenjian said. Body language, walking in various directions while holding a conversation or even turning one's head to see where a sound is coming from can be achieved in the virtual world.
That doesn't happen in two-dimensional engagements like Zoom, he said.
"The reason I like to go down that path is to demystify how people think we're taking a step away from human connection when we introduce virtual technology," Berenjian said. "We're actually taking a step back when we do [video chat]."
In using a virtual-reality headset, all of one's visual input becomes controlled by the application. Everything seen is computer-defined, nearly eliminating a person's ability to multitask like they would on a phone call, or even a videoconference call where a person can cook food or wash dishes while they talk, said Victoria Interrante, a professor in the University of Minnesota's Department of Computer Science and Engineering.
"It evokes a different mode of interpretation and interaction with what you're doing," Interrante said.
How commonplace VR headsets become, however, depends on several factors. Price is one; comfort is another. Some users can experience nausea or dizziness while in a headset for prolonged periods.
"Once the technology gets to the point where it's as physically comfortable to be in VR as it is to be in the real world, then I think we'll see more people adopting it," Interrante said.
A company of avatars
Not every experience in the metaverse requires virtual-reality headsets. Many can be accessed through the internet on a personal computer or mobile device.
While first-person virtual reality allows users to see a world through their own eyes, third-person VR is a method of puppeteering a digital character that represents the user.
Rem5 developed a desktop VR program called 1 City, 2 Realities as a diversity and inclusion training tool for employers. When logged into the online program, people can control their avatars to walk through a virtual gallery of information and images "highlighting systemic racial inequalities in our nation and Minneapolis."
Rem5 has worked with General Mills and Target to make the virtual experience part of employee training.
The company also created a similar program that focuses on privilege, Berenjian said.
An experiential learning opportunity such as this creates empathy, Berenjian said. The emotional response of watching scenes unfold in VR bridges the gap between watching a recap of those events on news channels and actually being there.
"Your brain is more immersed," he said.
Meetings in the metaverse take on different levels of engagement in avatar form. A videoconference meeting with dozens of attendees can become convoluted if there are too many faces within tiny squares on a computer screen.
In the metaverse, dozens of people can still gather, but have one-on-one or group conversations in a room if their avatars huddle together, just like in the real world.
"The knee-jerk reaction is to say, 'I don't want to replace the real world,'" Berenjian said. "We're not talking about replacing anything. We're talking about extending, or enhancing or making it more accessible."
Because immersive technology can make interactions more personable, it's becoming more common in therapy sessions and in diversity education. Meeting in the metaverse just for the sake of doing so, however, is not going to increase engagement with that technology, Berenjian said.
"We need compelling reasons to be in these spaces," he said. "It's novel and it's going to wear off."
Where companies can begin
If companies think a permanent virtual-training option should be available, then they need to think about how much they have to spend. For example, a program that uses VR headsets could be costly, Berenjian said.
The current retail price for a Quest 2 headset made by Meta, the parent company of Facebook, is $399. Multiply that by 10, or even 50, and it becomes a huge expense. Google, however, makes a VR device called Viewer, which costs as little as $9. People insert their smartphones into the Viewer to engage with VR apps on their phones.
But as innovators and advocates of Web 3.0, the next iteration of the internet, push a decentralized and more democratized system for emerging technologies, the use of augmented and virtual technology will become less expensive, and possibly free.
"We're talking about making this more accessible," Berenjian said.
In the interim, companies will have to do their due diligence to find potential partners that specialize in immersive technology and negotiate the costs. Companies like Rem5 aren't in abundance in the Twin Cities, but do exist here, and there are nationwide players.
Red Wing Shoes, for example, recently partnered with California-based Roblox Corp., the makers of the Roblox online gaming platform, to create a virtual experience called Red Wing BuilderTown through its new Builder Exchange Program.
Eventually, some of those designs will be constructed in the real world for people in need through Red Wing's partnership with Settled, an organization that houses the homeless with tiny homes. Roblox members are also able to shop for Red Wing merchandise within a virtual store.
St. Louis Park tech firm rolls out 'virtual social justice museum' on new platform
REM5 has worked with Target, General Mills, the University of St. Thomas and others to make it part of their employee training.
In the wake of George Floyd's murder, local entrepreneur Amir Berenjian and his team wanted to create a safe and accessible space to talk about some of the Twin Cities area's challenges, including its racial equity gaps, which are among the largest in the country.
Berenjian describes what his St. Louis Park company, REM5, created as akin to a "virtual social justice museum" — an exhibit in the metaverse. Similar to a computer game, people can walk through a virtual gallery of information and images on their home browsers.
REM5 has worked with General Mills, HandsOn Twin Cities, the University of St. Thomas and most recently Target to make the "1 City. 2 Realities" virtual experience part of employee training.
REM5 was founded in 2018 with the idea to make the technology of virtual reality more available to the masses.
The company recently launched the virtual exhibit on a new digital platform that Berenjian hopes can evolve into an easy interface for clients to build their own immersive learning and development experiences within the metaverse without the help of a developer.
"We believe that the metaverse or spatial web is a really, really powerful tool, but it's not accessible for most people so we built experiences and tools that empower non-gamers," he said.
"Think of it like the Wix [website design builder] for the metaverse," he said.
A few decades ago, people needed to hire web developers to build simple blogs or websites. Now, design applications allow novices to quickly create a fully functioning site.
REM5 has a brick-and-mortar lab in St. Louis Park where people can host events and rent time to play games with the help of virtual reality headsets. It also has a REM5 Studios creative agency and the REM5 For Good arm, which creates virtual experiences for education, corporate training and beyond.
It has used virtual reality to help with other diversity and equity training in the past. But in summer 2020, after Minneapolis became the center of a racial reckoning that swept the world, the REM5 team developed the "1 City. 2 Realities" experience.
"I had all this data that basically told this story of Minneapolis and Minnesota and how we hold ourselves up as 'Minnesota nice' and best parks and greatest places to live, yada, yada, but when you start to look at the data around wealth gaps, education gaps, incarceration rates, redlining, we are like the bottom five," he said.
People choose an avatar to "walk" through several gallery rooms with statistics, photos and videos, including murals painted across Minneapolis; a map of minority neighborhoods that were redlined as high-risk for mortgage lending; and 360-degree photo spheres that allow the user to travel through George Floyd Square.
The experience is done through a web browser. REM5, which had been known for its experiences with virtual reality headsets, had to adjust during the pandemic to engage users remotely without them having to wear headsets and be in the same room.
More companies have started to dip their toes in immersive metaverse experiences. This week, Walmart announced it had created a Walmart Land and Walmart's Universe of Play within the Roblox metaverse game platform.