Over the last decade, online connectivity has redefined how we interact with games. Unlike a game you purchase and complete once, live online games continually deliver new content, novel features and thriving communities. Something new is always happening in the game, giving players a reason to jump back in.
Now, generative AI (gen AI) is taking things to the next level with “living games” — games that adapt, grow and evolve themselves. Imagine a game that can learn how you play and adjust the environment, characters and storylines to fit your unique playing style and skill level. Picture building entire worlds, the perfect mods or custom game items just by describing them. Or having conversations with virtual characters that feel as natural as talking with another human player. These are the kinds of experiences gen AI makes possible.
Gen AI is changing how games are made — and played
To stay relevant and exciting, games must constantly evolve to engage existing players and attract new ones. Game developers learn from player data to make well-informed decisions about which features and updates to add. While AI has helped game makers automate sifting through data and spotting connections and trends, lengthy development cycles and manual processes have historically meant game updates lag months behind player requests and shifts in player behavior.
Now, leaders in game development are using gen AI across their production process to create new content faster — from initial ideas to concept art to dialogue and more. This is key as development costs continue to rise: AI frees studio staff to focus their efforts on the most important and interesting creative and development challenges.
With gen AI, developers can move away from “one-size-fits-all” strategies and create adventures tailored to each player. Instead of serving the same set of events and stories to everyone, games can now create new content while people are playing.
Together, these advancements give game developers, technology partners, players and even the game itself the ability to actively participate and drive ongoing improvements.
Google Cloud is helping build the future of games
Google Cloud provides game developers with the technology and tools they need to create interactive games. We offer developers the same services and infrastructure that power products like Google Search and YouTube, alongside advanced AI like Google’s Gemini models and developer platform Vertex AI. Beyond our own expertise and technology, our partner ecosystem connects game developers with game development tools, engines and platforms from across the industry.
Here’s how we’re already helping make living games a reality:
Capcom, creator of hit franchises such as Street Fighter, Monster Hunter and Resident Evil, is using Vertex AI and Gemini to generate hundreds of thousands of ideas for game development. With Google Cloud AI, Capcom can rapidly generate and iterate on concepts for items and environments, helping reduce costs and shorten development times for new games, while faster processing, high-quality image generation and effective management of large data sets let the team refine game content quickly.
Klang Games, the studio behind the massively multiplayer online (MMO) game SEED, uses Google Cloud to bring its evolving virtual world to life. With Google Kubernetes Engine (GKE), Vertex AI and Gemini models, Klang will enable hundreds of thousands of autonomous virtual humans, known as Seedlings, to exhibit unique personalities, form relationships, and shape emergent societies through natural conversations and persistent interactions.
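To give a sense of what this kind of idea generation can look like in practice, here is a minimal sketch — not Capcom’s or Klang’s actual code — that asks a Gemini model on Vertex AI to brainstorm in-game item concepts. The project ID, region and model name are placeholders you would replace with your own.

```python
# Minimal sketch: brainstorming game-item concepts with Gemini on Vertex AI.
# Illustrative only -- project ID, region, and model name are placeholders,
# not taken from any studio's actual pipeline.
import vertexai
from vertexai.generative_models import GenerativeModel

# Initialize the Vertex AI SDK for your own project and region.
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

prompt = (
    "You are helping a game design team brainstorm. "
    "Propose five concepts for legendary swords in a dark-fantasy RPG. "
    "For each, give a name, a one-sentence visual description, "
    "and a gameplay effect."
)

response = model.generate_content(prompt)
print(response.text)  # Raw ideas a designer can review, edit, or discard.
```

In a real pipeline, prompts like this would typically be batched and the results fed into a review tool where artists and designers keep, tweak or discard each concept.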
Series Entertainment, a fast-growing startup dedicated to making great games with the power of AI, is using Google Cloud’s AI to build games much faster, with AI-powered characters and rapidly generated game environments cutting development times by 90%. That speed lets artists quickly adjust the game world in response to player feedback.
ElevenLabs, an AI audio research and deployment company, uses Google Cloud’s scalable AI infrastructure to help game developers create unique sound effects, translate dialogue quickly, and produce realistic character voices. The partnership helps studios make games more immersive and easier to create by using AI for all kinds of voice and sound needs.
nunu.ai, a startup dedicated to revolutionizing quality assurance (QA) with AI, is using Google’s Gemini models to build one of the first multimodal AI agents that automatically tests a game throughout the development pipeline, providing continuous quality assurance at every stage. By automating repetitive testing tasks, nunu.ai lets developers focus on what matters most: creating exceptional gaming experiences for players.
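To make the multimodal idea concrete, here is a hedged sketch of one building block such an agent could use — not nunu.ai’s implementation — sending a gameplay screenshot and an instruction to a Gemini model on Vertex AI and asking it to flag visual defects. The file path, project ID and model name are placeholders.

```python
# Minimal sketch of one building block for AI-assisted QA: asking a
# multimodal Gemini model on Vertex AI to inspect a gameplay screenshot.
# Illustrative only -- not nunu.ai's implementation; the path, project ID,
# and model name are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

# Load a captured frame from an automated playtest run.
with open("screenshots/level_3_frame_0421.png", "rb") as f:
    screenshot = Part.from_data(data=f.read(), mime_type="image/png")

instruction = (
    "You are a QA assistant for a 3D platformer. Inspect this frame and "
    "report any visible issues: missing textures, clipping geometry, "
    "overlapping UI elements, or unreadable text. Answer as a short list."
)

response = model.generate_content([screenshot, instruction])
print(response.text)  # A tester or CI job can triage the reported issues.
```

A full testing agent would wrap checks like this in a loop that drives the game, captures frames and logs, and files issues automatically; the sketch above only shows the model call at the heart of that loop.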
Common Sense Machines (CSM.ai) builds 3D generative AI models and agents to help game artists build immersive game worlds. Working with Cosmic Lounge, CSM can automatically create game-ready 3D assets from simple text descriptions and images, reducing the time it takes artists to create new content.
We’re proud to play a key role in shaping the future of the games industry alongside our customers. To take your player experiences to the next level, head over to Google Cloud for Games and learn more.