At Computex 2023 in Taipei, Nvidia CEO Jensen Huang showcased a remarkable demonstration that provides a glimpse into the future of gaming and AI. The demo featured a visually stunning rendering of a cyberpunk ramen shop, where players could engage in real-time conversations with the virtual characters.
Instead of clicking through traditional dialogue trees, Nvidia's vision has players hold down a button and speak to video game characters in their own voice. This approach aims to create a more immersive and natural gaming experience; Huang called it a "peek at the future of games."
While the demo showcased the potential of this technology, some critics noted that the dialogue itself was fairly stilted. The groundbreaking aspect, however, lies in the generative AI's ability to understand and respond to natural speech, hinting at more dynamic and interactive gaming narratives.
The conversation depicted in the demo revolved around the player interacting with the ramen shop proprietor named Jin. Jin expressed concern about the rising crime in the city, attributing the recent damage to his shop to the activities of the notorious crime lord, Kumon Aoki. Jin suggested investigating the underground fight clubs on the city’s east side to confront Aoki.
Although a single video demonstration may not fully convey the potential of this technology, Nvidia's achievement in bringing natural speech interaction into a gaming environment is commendable. The company is expected to release the demo, letting gamers experience it firsthand and see how differently conversations with AI characters can play out in real time.
The demo itself was developed by Nvidia in collaboration with Convai to showcase Nvidia ACE (Avatar Cloud Engine) for Games. This middleware suite includes NeMo for deploying large language models, Riva for speech-to-text and text-to-speech, and other components, and can run either locally or in the cloud.
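Conceptually, a conversational turn in such a system chains three stages: speech recognition, language-model generation, and speech synthesis. The minimal sketch below illustrates that flow only; the function bodies are stand-in placeholders, not the real Riva or NeMo APIs, and the persona name is taken from the demo.

```python
# Hypothetical sketch of an ACE-style conversational turn:
# speech-to-text -> LLM reply -> text-to-speech.
# All function bodies are placeholders, NOT the actual Riva/NeMo APIs.

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a Riva-style ASR call; pretend the audio is already text.
    return audio.decode("utf-8")

def generate_reply(player_text: str, persona: str) -> str:
    # Stand-in for a NeMo-style LLM call conditioned on a character persona.
    return f"{persona}: I heard you say '{player_text}'."

def text_to_speech(text: str) -> bytes:
    # Stand-in for a Riva-style TTS call producing audio for the NPC's voice.
    return text.encode("utf-8")

def npc_turn(audio_in: bytes, persona: str = "Jin") -> bytes:
    """One conversational turn: player audio in, NPC audio out."""
    player_text = speech_to_text(audio_in)
    reply_text = generate_reply(player_text, persona)
    return text_to_speech(reply_text)
```

Because each stage is an independent service, a real deployment could run any of them locally or in the cloud, which matches how Nvidia describes the suite.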
Powered by Unreal Engine 5 and featuring impressive ray tracing, the demo exhibits stunning visual fidelity. However, some observers found the chatbot side of the demo less compelling by comparison, since existing chatbots have already produced more engaging dialogue.
According to Nvidia’s VP of GeForce Platform, Jason Paul, the technology showcased in the demo can scale beyond single-character interactions and could even enable NPCs to talk to one another, although further testing is required to confirm this.
While it remains uncertain whether developers will adopt the full ACE toolkit shown in the demo, games such as S.T.A.L.K.E.R. 2: Heart of Chornobyl and Fort Solis have been confirmed to use one component, "Omniverse Audio2Face," which synchronizes the facial animations of 3D characters with the speech of their voice actors, further enhancing immersion.
In summary, Nvidia's Computex 2023 demo represents a notable advance in gaming and AI technology. The ability to converse with AI game characters in real time using natural speech marks a significant step toward a more immersive and interactive gaming experience, and as the technology matures it could reshape how game narratives are told.