In collaboration with Minnesota United, REM5 STUDIOS has produced an all-access VR experience that will premiere exclusively at this year's Minnesota State Fair, running August 28 through September 4. This all-ages experience puts you on the field and behind the scenes at Allianz Field for a Minnesota United game day. Longtime fans and newcomers to the sport alike can take in the sights and sounds of what it's like to be part of the action. Through 360 video technology, you'll be able to look all around you in a VR headset as you go into the locker room, sit fieldside, and explore the supporters' section to take in the full experience. You're in control! This new collaboration between the United and REM5 will find new and exciting ways to bring immersive technologies to the fan experience in and out of the stadium. We're excited to bring a new experience to the State Fair that will let thousands of fans and soon-to-be fans see what all the hype is about. This FREE VR experience will run every day of the fair at the Loons' showcase at Fan Central. Simply show up and wait for your turn to go behind the scenes. See you there!
Grab state fair tickets here.
Human-computer interaction will be a huge component of the future, but REM5 STUDIOS is trying to bring technology to the masses right now.
There's a new way to check out sweet spots in St. Louis Park and Golden Valley called the Immersive Virtual Getaway Experience, Trish Foster and Amir Berenjian explain.
Destination marketing organization Discover St. Louis Park aims to drive visitors to the Minneapolis suburbs of St. Louis Park and Golden Valley through an augmented reality campaign highlighting local attractions. The AR campaign features artwork by St. Louis Park artist Adam Turman and was created by immersive experience agency REM5 Studios. It lets residents and visitors to the St. Louis Park and Golden Valley areas scan a QR code on posters, postcards and kiosks to access an augmented reality lens within the Instagram mobile application.

Trish Foster, marketing director of Discover St. Louis Park, told Adweek, “We wanted to create an eye-catching, immersive digital experience that would allow would-be travelers to explore our cities and make them want to visit those places in real life. REM5 Studios took the colorful artwork of resident artist Adam Turman and created an augmented reality Virtual Getaway that does just that. We are always looking for new ways to attract visitors, and we think this is the next evolution in destination marketing.”

As part of the campaign, postcards were sent to residents, businesses and meeting and event planners in the St. Louis Park and Golden Valley suburbs, while kiosks have been installed at local hotels and other destinations. The campaign artwork will also be featured in a variety of print publications.

Once someone scans the QR code, the lens will appear in the Instagram app, and they’ll be able to tap on prompts to learn about different destinations in the area, such as the St. Louis Park Rec Center and Westwood Hills Nature Center.
After someone completes the experience, they’ll receive a code they can enter on the Discover St. Louis Park website to receive special offers at nearby businesses. People can also decipher hidden clues within the experience to unlock a second code they can email to the marketing organization for a chance to win a prize. People don’t need to travel to Minnesota to access this AR experience. They can click a link on the Discover St. Louis Park website to download a version of the poster they can print and use to experience the campaign wherever they are. Read the original article on Adweek: https://www.adweek.com/brand-marketing/discover-st-louis-park-uses-ar-to-encourage-exploration-of-minneapolis-suburbs/
MINNEAPOLIS — Standing in front of her paintings, dressed in a bespoke suit made of material featuring her own artwork, Sarah Edwards appears to have made an unapologetic entrance into the public art world. However, she says she's feeling quite vulnerable.
And though only she and I stood in the gallery room at the Chambers Hotel, Edwards said, "It feels like I'm standing naked in front of a big room of people." As CEO of Fashion Week MN and marketing agency Some Great People, Edwards has spent her professional career connecting community members and putting the spotlight on other artists. But Saturday, Feb. 4 at Sonder, an interactive art experience at the Chambers Hotel, she's finally going to share her own creations. "[I'm] anxious, for sure," Edwards said. "Impostor syndrome." However, Edwards says she's grateful to be able to do it alongside roughly 20 other artists who will be occupying rooms and hallways at the Chambers Hotel. Each room will feature a different local maker. In one room, you'll see couture fashion courtesy of local designer Keiona Cook. In another, you can create AI-generated fashion, with the help of REM5 Studios CEO Amir Berenjian. "When people come here on Saturday, they'll be able to--through text--input their wildest dreams which will be a one-of-one shirt, pants, or shoes and we'll be showcasing it up on the big screen," Berenjian said.
Berenjian said he's looking forward to showcasing his studio's work alongside so many other creatives.
"We all feed off each other's energy and are all inspired by each other," Berenjian said. Producers of Sonder also say you can expect to be greeted by a hip-hop violinist when you enter, and ballet dancers will serve you cocktails. It's all part of an experience Edwards hopes will be, in her words, "weird." "Maybe there's that outfit you've never been able to wear," Edwards said. "One person said to me, 'It's going to be like Burning Man meets the Met Gala.' And I'm like, 'OK!'" Talking about the creative community is when Edwards truly lights up. It's why she named the event after a poetic, albeit fictional, definition of "sonder." That definition comes from what started as a web glossary of made-up words author John Koenig penned to articulate "strangely powerful emotions," eventually published as the 2021 New York Times bestseller "The Dictionary of Obscure Sorrows." In it, you'll find: "Sonder: The realization that each random passerby is living a life as vivid and complex as your own." Edwards is admittedly obsessed with that notion and hopes the vivid and complex spirits that make up artists and their neighbors can join together to create a beautiful experience. "My favorite part of all of it is the connections that are made," Edwards said. "Build relationships, figure out ways that you can partner, celebrate, uplift all the awesome artists that are a part of this."

https://www.kare11.com/article/news/local/kare11-sunrise/sonder-sarah-edwards-chambers-hotel-minneapolis/89-a444cf0d-3e47-4123-a6bf-34eefc36a050

After its first public demo of live text-to-image generation in late December, the immersive web platform SIMULACRA has announced two more AI tools for the experience. You can now explore real-time text-to-architecture and text-to-fashion, opening up exponential possibilities for creation within the social 3D web platform.
These tools are a glimpse into the future of content creation for both virtual worlds and physical/digital products.

Text-to-architecture: This feature allows creators to seamlessly update the walls, flooring, and artwork within the virtual apartment layout in real time. Through the power of AI, the user can swap out hardwood floors for carpet, a ball pit, or even moon rocks. This feature removes the pain point of making wholesale changes through complex 3D modeling software and allows for infinite creativity from anyone, with or without 3D art experience.

Text-to-fashion: The future of self-expression is increasingly digital. With this new AI feature, user creativity is completely unlocked through one-of-one avatar customization. This system is not only the future of how users can represent themselves in SIMULACRA; it is also the future of how brands will empower their communities to create and customize products in both the physical and virtual worlds.

As a platform focused on storytelling and education, SIMULACRA is leveraging these AI tools to accelerate content creation for brands, educators, and creatives so they can create rich and meaningful digital experiences for their communities.
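To make the text-to-architecture idea concrete, here is a toy sketch of the prompt-to-scene flow described above. Every name in it (the scene dictionary, `apply_prompt`, the material lists) is invented for illustration; SIMULACRA's actual pipeline is not public, and in a real system the prompt would drive a generative model that synthesizes textures rather than a keyword lookup.

```python
# Toy sketch of a text-to-architecture command flow. A simple keyword match
# stands in for the generative model so the control flow is easy to see.

# Surfaces the prompt can target and materials the "model" can produce.
# A real system would synthesize arbitrary materials instead of picking
# from a fixed list.
SURFACES = {"floor", "walls", "artwork"}
MATERIALS = {"hardwood", "carpet", "ball pit", "moon rocks", "marble"}

def apply_prompt(scene: dict, prompt: str) -> dict:
    """Update a scene dict from a prompt like 'swap the floor to moon rocks'."""
    text = prompt.lower()
    surface = next((s for s in SURFACES if s in text), None)
    material = next((m for m in MATERIALS if m in text), None)
    if surface is None or material is None:
        return scene  # nothing recognized; leave the scene unchanged
    updated = dict(scene)
    updated[surface] = material
    return updated

scene = {"floor": "hardwood", "walls": "white paint", "artwork": "poster"}
scene = apply_prompt(scene, "Swap the floor to moon rocks")
print(scene["floor"])  # moon rocks
```

The point of the sketch is the user-facing contract: a plain-language sentence in, an updated scene out, with no 3D modeling software in between.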
"We believe that AI is a key component in the democratization of content creation for the next generation of digital experiences," said Amir Berenjian, CEO of REM5 STUDIOS. "We’re excited to see what our partners build that goes far beyond gaming." To demonstrate this capability, SIMULACRA is making an easy-to-use demo publicly available, accessible to anyone with a PC or Mac in a Chrome web browser. No downloads are required. The demo is multi-user, so feel free to experience it with others. Try it right now: https://www.simulacra.io/AI For private tours, or to meet up with the SIMULACRA team, please email brian@rem5vr.com.

A look back at internet trends and a look forward at the potentially far-reaching new applications of the technology.
Imagine a meeting with a group of 20 colleagues to plan the opening of a new retail store. You’re staring at a checkered screen of faces on a video call: 2D renderings of a 3D space. There is no spatial awareness, eye contact, or body language. Now, imagine that meeting taking place in a virtual reality environment. You embody a 3D avatar with accurately represented facial expressions and body movement, and you can communicate through spatial audio within a digital twin of the new store. Now, you’re back in the physical world, walking down the street in a bustling city, stuck staring down at your smartphone for directions and a recommendation on a place to stop for dinner. Instead, imagine that same information as a digital overlay on the physical world, right from your glasses or contacts.

The Metaverse = The Spatial Internet

Welcome to the metaverse: the internet with a sense of presence. At its core, the metaverse is a 3D interface, a digital layer that allows us to access information and interact or communicate with others in a more natural and frictionless way. The metaverse isn't a singular medium; it's a seamless combination of immersive technologies including virtual reality (VR) and augmented reality (AR). If you’re a surgeon at the operating table with a patient, you may want an augmented reality display that provides real-time information and instructions. But if you’re still in medical school, a virtual reality simulation, in which you can learn through repetition, get hands-on training from a teacher anywhere on the planet, fail safely, and benefit from a real-time data feedback loop, is likely preferable. The potential is here: augmented reality to seamlessly interact with information, and virtual reality to help democratize experiences.

The Business of the Metaverse

A simple mention of the “M” word can generate a wide range of reactions, from excitement to skepticism to confusion.
Is it marketing hype driven by greed and capitalism? Or does it represent the next evolution of how humans leverage technology? Adding fuel to the conversation, institutions including McKinsey & Company and Citigroup have projected the metaverse to generate $5 trillion to $13 trillion in value by 2030. These projections are supported by the fact that spatial and social platforms such as Roblox and Minecraft already count over 200 million monthly active users and billions of dollars of e-commerce, with seemingly no end to the growth in sight. Judging from conversations with parents of younger kids, Generations Z and Alpha are skipping over legacy social media and going straight to the spatial web on both PC and mobile devices, and the transition to head-mounted displays is on the horizon.

Humans + Computers: 1980s to Today

To contemplate the idea that is the metaverse, it’s important to contextualize past paradigm shifts in our relationship with technology and its role in our daily lives. The first significant wave in mainstream computing occurred in the 1980s as the computer moved into the home. Throughout the '90s, businesses and consumers quickly found value in the internet through access to massive amounts of information and basic communication. Then came the dot-com bubble of 2000, and, for a while, headlines seemed to label the internet a passing fad. Jump ahead to 2010: 77 percent of American households had personal computers, the internet had more than 200 million websites, and Facebook had over 600 million users. The internet wasn’t a fad. Next came the mobile and cloud era, which ramped up at an even quicker pace. In 2022, more than 85 percent of Americans owned a smartphone, with that figure jumping to 96 percent for ages 18-29. Nearly every aspect of how we work, learn, play, transact, and socialize has been interwoven with technology.
The Next Generation of the Internet

Since the dawn of computing, the primary way we interact with technology has been through flat screens and flat interfaces: a keyhole view into the expansive digital world. It lacks the fundamental sense of presence that is key to the human experience. Our brains unconsciously map the 3D world around us, both visually and audibly, storing information, memories, and emotions. So, in some ways, transitioning into the world of spatial computing brings us back to how we naturally engage with the world around us. But it’s early. Aspirations of spatial computing, virtual reality, and augmented reality are nothing new. Now, though, computing power and the underlying infrastructure have begun to catch up to those aspirations. As with many disruptive technologies, billions of dollars are currently being invested in this space across industries such as retail, education, healthcare, marketing, entertainment, training, and more. Companies such as Meta (Facebook), HTC, Microsoft, Apple, HP, Nvidia, Unity, Google, Amazon, and Qualcomm are ramping up innovation in hardware, software, and infrastructure. Following massive consumer adoption by Generations Z and Alpha, enterprise is embracing the new normal and leveraging metaverse technology for remote work and internal upskilling, with companies such as Accenture and Walmart already deploying tens of thousands of virtual reality headsets.

Where Does the Metaverse Go From Here?

We are undoubtedly on the doorstep of the next generation of how humans interact with technology. Like its technological predecessors, the metaverse has the potential to change how we work, learn, play, and connect. And when it's here, we won't be talking about it. It will be seamlessly integrated into all aspects of our lives, just like the internet. The metaverse is not dead. It’s just in its infancy.
https://tech.mn/news/the-metaverse-is-more-than-just-those-goggles

Early signs show that generative AI will drastically shape how we build virtual worlds - to a point. Generative AI has already appeared everywhere, as discussions spread faster than the common cold at a tech conference. The metaverse will be impacted too, and we got one step closer at CES. There, NVIDIA unveiled updates to its Omniverse platform, connecting it with Move.ai for body movements and Lumirithmic for facial meshes. Think of it as a cohesive toybox for deploying experiences, with a spectrum of assistive tools, each ready to pluck and apply. “NVIDIA has one of the broadest spectrums in the world for deploying generative AI applications at scale,” said Jesus Rodriguez in a great piece. I agree, but the ramifications are much wider than that. In 2023, we may see a wave of generative AI tools pour into metaverse platforms, perhaps as bits and pieces rather than a cohesive package like Omniverse. Simulacra is one of the first metaverse platforms to offer these features. Meta unveiled a ‘Builder Bot’ tool that lets users speak objects into existence within Horizon Worlds, commanding shapes to pop into virtual spaces. We will see more tools that ease the building process and, perhaps, bring more people into the fold. At the same time, I want to step away from speculation. Pundits often overhype generative AI, letting flights of fancy lift them from reality. Yes, the number of research papers on the topic is increasing, and we may one day see an AI tool that can build a fully-featured world from mere sentences. The seeds of this vision were planted in 2022 and before. But we are a long way from that cohesive package. Still, green shoots are peeking out of the ground. Companies already offer generative AI solutions that can also benefit metaverse projects. For example, they can make:
Executive summary

A summary of my findings, with an expansion on each point throughout the piece:
How is AI related to the metaverse?

AI is already being used across multiple metaverse platforms, from curating recommended experiences to generating avatars based on inputs. The difference here is scale; generative AI has the potential to step beyond smaller frameworks and generate whole worlds and assets from text or voice inputs. Take video games, where AI is already making waves. It’s not a surprise; of all industries, video games will likely be the most disrupted by the trend. James Gwertzman and Jack Soslow at a16z Games point to the time-sucking processes of asset creation, experience design, and iterative testing. Assets that could have taken weeks to finalise can take just a few hours, with some minor tweaks and embellishments. Stages of development that may have taken months or years are condensed into short bursts, freeing the team to move on to the next phase. The swift rendering is important, too: if assets can be tweaked and then tested quickly, iterative evolution and finalisation save the employee’s time. The same benefits apply to metaverse projects. Virtual worlds can be built from simple inputs to lay the framework. Gameplay additions can be layered on as well, then tweaked to the requirements of the user. Once built, the virtual world can be populated with NPCs that follow set instructions, speaking without needing voice actors. “There is going to be a time when developers are going to spend less time writing code and more time expressing what they need,” said Danny Lange, senior vice president of AI at Unity. “That increased productivity is going to be transformative for applications of the metaverse outside a narrow space, like gaming.” These experiences will not match the quality of AAA titles in console gaming.
God of War: Ragnarok would lack impact if Kratos – played by the award-winning Christopher Judge – spoke with a replicable voice and intonation. Nor would AI tools match the exact specifications for the vines in Vanaheim, or the frosted mountains of Niflheim. But generative AI cuts the process down so much that it becomes easier for anyone to make worlds or spaces, which encourages innovation and experimentation. It’s the elixir of tools, potential and connectivity that could make metaverse platforms thrive.

Auto-generating virtual worlds as templates

One use is generating worlds. I’m not the best builder - my Lego sets attest to that. The same goes for making a website, even with helpful frameworks. But building virtual worlds, on any platform, takes the complexity to a new level. Professionals worry about the position and design of objects, the interactions of NPCs, and the exact height and shape of locations – a far cry from a simple 2D screen with images or words. Savvy creatives can build web-based experiences via A-Frame, or use the tools already supplied within metaverse platforms. Horizon Worlds has a great building experience, as does Rec Room. They all work, but additional components can help smooth out the ideation process. Plus, frictionless tools expand adoption across industries; WordPress made it easier for people to build websites, and TikTok’s editor made it easier to create short-form content. Generative AI can auto-populate virtual worlds via text or voice prompts, saving time and effort. The same tools can also generate images and clips that can be published across multiple formats, from websites to YouTube. We already have tech that can do this. NVIDIA GET3D can generate textured 3D shapes, trained only on 2D images, that can then be used as assets. “GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA.
“Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.”

Going beyond the hype

Though exciting, generative AI has sparked sky-high thinking that filled the heads of pseudo-professionals with fumes of nonsense. One narrative focuses on replacement, and how it will displace large swathes of the workforce. Nathan Benaich, general partner at Air Street Capital, argues for nuance in an interview with Sifted: “The reality is – even though the progress right now seems like it’s exponential in images, video and text – I think there’s just so many nuances that companies need to solve for when they take these capabilities and build products for many people to use in a workflow.” Personally, I do not see artists disappearing from the process. Humans are needed at all stages of the creative process in order to present a cohesive vision. Think of it as two humans on either side of an AI machine. One types out what's needed: the broad shape of an idea, with parameters and conditions. The machine churns away, doing the hard work of bringing the image to reality. When it is spat out on the other side, the second human clips and shapes the asset into a closer vision of what the team needs. The AI sits in the middle, picking up the hard work of design, building, and colour – but it is not a replacement. I see generative AI as a supplementary tool for metaverse development. As in video games, it will provide an assistive supplement to the creation of virtual spaces.

Predictions on how to use generative AI well with the metaverse

Generative AI will have a huge impact. It assists with the creative process, laying out the framework of a virtual world which can then be tweaked, edited, or replaced in the final version. I can see a deluge of low-quality virtual worlds sprouted by an AI, a hodgepodge of items dedicated to popular IPs.
We will no doubt see a virtual world where assets are inspired by a blend of Pirates of the Caribbean and Harry Potter. (Also, it would not shock me if these worlds are popular too, because players search with high-volume keywords across platforms).
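The template idea can be sketched in a few lines: a text prompt seeds a procedural layout that a creator would then refine by hand. This toy stands in for a real generative model; every name in it (`THEMES`, `generate_world`, the asset strings) is invented for illustration and does not reflect any actual platform's API.

```python
import random

# Toy sketch of prompt-driven world templates: theme words found in the
# prompt select an asset palette, and a seeded RNG scatters objects so the
# same prompt + seed always reproduces the same starting layout.
THEMES = {
    "pirate": ["ship", "palm tree", "treasure chest"],
    "castle": ["tower", "drawbridge", "banner"],
}

def generate_world(prompt: str, n_objects: int = 5, seed: int = 0) -> list[dict]:
    """Return a list of placed objects for any theme word found in the prompt."""
    rng = random.Random(seed)  # seeded so the template is reproducible
    assets = [a for theme, objs in THEMES.items()
              if theme in prompt.lower() for a in objs]
    if not assets:
        assets = ["cube"]  # fallback placeholder when no theme matches
    return [
        {"asset": rng.choice(assets),
         "x": round(rng.uniform(-10, 10), 2),
         "z": round(rng.uniform(-10, 10), 2)}
        for _ in range(n_objects)
    ]

world = generate_world("a pirate cove at sunset", n_objects=3)
print([obj["asset"] for obj in world])
```

The output is deliberately a rough draft, a scatter of themed placeholders, which is exactly the "framework to be tweaked, edited, or replaced" role the essay assigns to generative AI.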
I also predict that a metaverse platform will catapult to mainstream stardom if it is web-based and offers generative AI elements. The winning platform may be frictionless to use (perhaps web-based), make it easy to generate worlds, and have a discovery feature based on keywords. That potent cocktail would generate a creator-led ecosystem which may rise in popularity swiftly. Simulacra is well-positioned, based on its previously discussed announcements. Finally, these outputs need to be complemented with a robust set of creator tools to shape the content itself. Generative AI could work on virtual platforms, but it could lead to a deluge of low-quality content if the tools to shape it do not function well. If a world is made but difficult to edit, users may clock off quickly. Streamlined editors are important for experimentation and adoption; the same is true here, as a complement to the technology. I do not believe AI will replace human creativity. Ultimately it is a tool, and it can be used well or poorly. As Mokokoma Mokhonoana once said, “The usefulness of a thing is dependent not on what it is, or how it can be used, but on the needs or wants of the user.”

https://www.immersivewire.com/p/generative-ai-metaverse